Test Report: Docker_Linux_crio 20033

ff5f503981c4fd2196f1d2b6598014c1f7aaa64b:2024-12-02:37311

Failed tests (3/330)

| Order | Failed test                                  | Duration (s) |
|-------|----------------------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress                  | 151.98       |
| 38    | TestAddons/parallel/MetricsServer            | 322.21       |
| 176   | TestMultiControlPlane/serial/RestartCluster  | 125.99       |
TestAddons/parallel/Ingress (151.98s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-522394 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-522394 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-522394 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [65aad4ed-4e03-48b5-829b-e80b55f92706] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [65aad4ed-4e03-48b5-829b-e80b55f92706] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003184089s
I1202 11:33:52.844011   13299 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-522394 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.013175865s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-522394 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
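The ssh exit status 28 in the stderr block above is curl's "operation timed out" code: the in-node request to the ingress controller on port 80 never got a response. A minimal sketch for re-checking this by hand, assuming the addons-522394 profile from this run is still up and the commands are run from the directory that contains the test's testdata/ manifests (profile name, manifest paths, and host header are copied from the log above; the explicit -m 30 curl timeout is an addition for quicker feedback):

	# Re-apply the ingress and the backing nginx pod/service used by the test
	kubectl --context addons-522394 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-522394 replace --force -f testdata/nginx-pod-svc.yaml
	# Repeat the in-node curl; on a timeout, minikube exits non-zero and stderr
	# again shows "ssh: Process exited with status 28"
	out/minikube-linux-amd64 -p addons-522394 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"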
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-522394
helpers_test.go:235: (dbg) docker inspect addons-522394:

-- stdout --
	[
	    {
	        "Id": "f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d",
	        "Created": "2024-12-02T11:31:02.743927926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 15359,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-02T11:31:02.885474163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d/hosts",
	        "LogPath": "/var/lib/docker/containers/f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d/f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d-json.log",
	        "Name": "/addons-522394",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-522394:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-522394",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/84d765ebcb16cc3968ac44d5bd8ac1c9a7e64095628155f57bbcff42e9990b07-init/diff:/var/lib/docker/overlay2/098fd1b37996620d1394051c0f2d145ec7cc4c66ec7f899bcd76f461df21801b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84d765ebcb16cc3968ac44d5bd8ac1c9a7e64095628155f57bbcff42e9990b07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84d765ebcb16cc3968ac44d5bd8ac1c9a7e64095628155f57bbcff42e9990b07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84d765ebcb16cc3968ac44d5bd8ac1c9a7e64095628155f57bbcff42e9990b07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-522394",
	                "Source": "/var/lib/docker/volumes/addons-522394/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-522394",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-522394",
	                "name.minikube.sigs.k8s.io": "addons-522394",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a35dc827c374d8139c79717941793e6398ff4a537867124a125cb8259705dcb",
	            "SandboxKey": "/var/run/docker/netns/2a35dc827c37",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-522394": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d38ec22788b795d8d65d951fb8091f29e0367d83fb60ea07791faa029050205d",
	                    "EndpointID": "418b55a9a55b19a3b61941492fd0c85aa323b6e304ba517b05f82505b61c0932",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-522394",
	                        "f1156cea5e57"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
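For reference, the host port mapped to the node's SSH port (32768 under "22/tcp" in the Ports map above) can be read straight out of this inspect output with a Go template; the same expression appears later in the Last Start log via cli_runner:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-522394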
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-522394 -n addons-522394
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-522394 logs -n 25: (1.182041813s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-386345                                                                     | download-only-386345   | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| start   | --download-only -p                                                                          | download-docker-535118 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | download-docker-535118                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-535118                                                                   | download-docker-535118 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-422651   | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | binary-mirror-422651                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43737                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-422651                                                                     | binary-mirror-422651   | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| addons  | disable dashboard -p                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | addons-522394                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | addons-522394                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-522394 --wait=true                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | -p addons-522394                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-522394 ip                                                                            | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-522394 ssh curl -s                                                                   | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ssh     | addons-522394 ssh cat                                                                       | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | /opt/local-path-provisioner/pvc-8f0db6fc-4610-41c7-b84f-75a28b3ebb7d_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:34 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-522394 addons                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-522394 ip                                                                            | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:36 UTC | 02 Dec 24 11:36 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:30:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:30:38.448354   14602 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:30:38.448475   14602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:38.448485   14602 out.go:358] Setting ErrFile to fd 2...
	I1202 11:30:38.448490   14602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:38.448693   14602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 11:30:38.449265   14602 out.go:352] Setting JSON to false
	I1202 11:30:38.450170   14602 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":789,"bootTime":1733138249,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:30:38.450278   14602 start.go:139] virtualization: kvm guest
	I1202 11:30:38.452636   14602 out.go:177] * [addons-522394] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:30:38.454356   14602 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:30:38.454359   14602 notify.go:220] Checking for updates...
	I1202 11:30:38.457297   14602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:30:38.458908   14602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:30:38.460359   14602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	I1202 11:30:38.461756   14602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:30:38.463240   14602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:30:38.464726   14602 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:30:38.486735   14602 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1202 11:30:38.486831   14602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:30:38.531909   14602 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-02 11:30:38.523449877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:30:38.531999   14602 docker.go:318] overlay module found
	I1202 11:30:38.534142   14602 out.go:177] * Using the docker driver based on user configuration
	I1202 11:30:38.535539   14602 start.go:297] selected driver: docker
	I1202 11:30:38.535555   14602 start.go:901] validating driver "docker" against <nil>
	I1202 11:30:38.535565   14602 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:30:38.536412   14602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:30:38.581147   14602 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-02 11:30:38.572979487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:30:38.581348   14602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:30:38.581621   14602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:30:38.583502   14602 out.go:177] * Using Docker driver with root privileges
	I1202 11:30:38.584779   14602 cni.go:84] Creating CNI manager for ""
	I1202 11:30:38.584858   14602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 11:30:38.584869   14602 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 11:30:38.584937   14602 start.go:340] cluster config:
	{Name:addons-522394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-522394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:30:38.586374   14602 out.go:177] * Starting "addons-522394" primary control-plane node in "addons-522394" cluster
	I1202 11:30:38.587531   14602 cache.go:121] Beginning downloading kic base image for docker with crio
	I1202 11:30:38.588692   14602 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1202 11:30:38.589859   14602 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:30:38.589886   14602 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1202 11:30:38.589915   14602 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:30:38.589927   14602 cache.go:56] Caching tarball of preloaded images
	I1202 11:30:38.590032   14602 preload.go:172] Found /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:30:38.590045   14602 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:30:38.590417   14602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/config.json ...
	I1202 11:30:38.590442   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/config.json: {Name:mk4bd885db87af2c06fd1da748cdd3f6e169fab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:30:38.605457   14602 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1202 11:30:38.605602   14602 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1202 11:30:38.605628   14602 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1202 11:30:38.605638   14602 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1202 11:30:38.605651   14602 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1202 11:30:38.605661   14602 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1202 11:30:50.373643   14602 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1202 11:30:50.373681   14602 cache.go:194] Successfully downloaded all kic artifacts
	I1202 11:30:50.373730   14602 start.go:360] acquireMachinesLock for addons-522394: {Name:mke96f53f0edd6a6d51035c4d22fed40662473b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:30:50.373856   14602 start.go:364] duration metric: took 86.059µs to acquireMachinesLock for "addons-522394"
	I1202 11:30:50.373892   14602 start.go:93] Provisioning new machine with config: &{Name:addons-522394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-522394 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:30:50.373977   14602 start.go:125] createHost starting for "" (driver="docker")
	I1202 11:30:50.376042   14602 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1202 11:30:50.376367   14602 start.go:159] libmachine.API.Create for "addons-522394" (driver="docker")
	I1202 11:30:50.376408   14602 client.go:168] LocalClient.Create starting
	I1202 11:30:50.376529   14602 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem
	I1202 11:30:50.529702   14602 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem
	I1202 11:30:50.716354   14602 cli_runner.go:164] Run: docker network inspect addons-522394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 11:30:50.732411   14602 cli_runner.go:211] docker network inspect addons-522394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 11:30:50.732483   14602 network_create.go:284] running [docker network inspect addons-522394] to gather additional debugging logs...
	I1202 11:30:50.732508   14602 cli_runner.go:164] Run: docker network inspect addons-522394
	W1202 11:30:50.748560   14602 cli_runner.go:211] docker network inspect addons-522394 returned with exit code 1
	I1202 11:30:50.748593   14602 network_create.go:287] error running [docker network inspect addons-522394]: docker network inspect addons-522394: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-522394 not found
	I1202 11:30:50.748606   14602 network_create.go:289] output of [docker network inspect addons-522394]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-522394 not found
	
	** /stderr **
	I1202 11:30:50.748739   14602 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 11:30:50.765326   14602 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cda8f0}
	I1202 11:30:50.765369   14602 network_create.go:124] attempt to create docker network addons-522394 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 11:30:50.765416   14602 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-522394 addons-522394
	I1202 11:30:50.824188   14602 network_create.go:108] docker network addons-522394 192.168.49.0/24 created
	I1202 11:30:50.824218   14602 kic.go:121] calculated static IP "192.168.49.2" for the "addons-522394" container
	I1202 11:30:50.824296   14602 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 11:30:50.839741   14602 cli_runner.go:164] Run: docker volume create addons-522394 --label name.minikube.sigs.k8s.io=addons-522394 --label created_by.minikube.sigs.k8s.io=true
	I1202 11:30:50.857124   14602 oci.go:103] Successfully created a docker volume addons-522394
	I1202 11:30:50.857208   14602 cli_runner.go:164] Run: docker run --rm --name addons-522394-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-522394 --entrypoint /usr/bin/test -v addons-522394:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1202 11:30:57.970270   14602 cli_runner.go:217] Completed: docker run --rm --name addons-522394-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-522394 --entrypoint /usr/bin/test -v addons-522394:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (7.1130193s)
	I1202 11:30:57.970302   14602 oci.go:107] Successfully prepared a docker volume addons-522394
	I1202 11:30:57.970321   14602 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:30:57.970343   14602 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 11:30:57.970408   14602 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-522394:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 11:31:02.677696   14602 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-522394:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.707240494s)
	I1202 11:31:02.677724   14602 kic.go:203] duration metric: took 4.707379075s to extract preloaded images to volume ...
	W1202 11:31:02.677851   14602 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 11:31:02.677943   14602 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 11:31:02.728955   14602 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-522394 --name addons-522394 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-522394 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-522394 --network addons-522394 --ip 192.168.49.2 --volume addons-522394:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1202 11:31:03.051385   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Running}}
	I1202 11:31:03.069862   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:03.089267   14602 cli_runner.go:164] Run: docker exec addons-522394 stat /var/lib/dpkg/alternatives/iptables
	I1202 11:31:03.133462   14602 oci.go:144] the created container "addons-522394" has a running status.
	I1202 11:31:03.133488   14602 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa...
	I1202 11:31:03.205441   14602 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 11:31:03.226382   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:03.243035   14602 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 11:31:03.243057   14602 kic_runner.go:114] Args: [docker exec --privileged addons-522394 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 11:31:03.283614   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:03.306449   14602 machine.go:93] provisionDockerMachine start ...
	I1202 11:31:03.306537   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:03.324692   14602 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:03.324969   14602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 11:31:03.324988   14602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 11:31:03.325641   14602 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53152->127.0.0.1:32768: read: connection reset by peer
	I1202 11:31:06.460018   14602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-522394
	
	I1202 11:31:06.460054   14602 ubuntu.go:169] provisioning hostname "addons-522394"
	I1202 11:31:06.460119   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:06.476971   14602 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:06.477195   14602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 11:31:06.477211   14602 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-522394 && echo "addons-522394" | sudo tee /etc/hostname
	I1202 11:31:06.610900   14602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-522394
	
	I1202 11:31:06.610971   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:06.627751   14602 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:06.627915   14602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 11:31:06.627932   14602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-522394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-522394/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-522394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:31:06.752451   14602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:31:06.752475   14602 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6540/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6540/.minikube}
	I1202 11:31:06.752494   14602 ubuntu.go:177] setting up certificates
	I1202 11:31:06.752508   14602 provision.go:84] configureAuth start
	I1202 11:31:06.752566   14602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-522394
	I1202 11:31:06.769244   14602 provision.go:143] copyHostCerts
	I1202 11:31:06.769343   14602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem (1078 bytes)
	I1202 11:31:06.769463   14602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem (1123 bytes)
	I1202 11:31:06.769519   14602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem (1679 bytes)
	I1202 11:31:06.769568   14602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem org=jenkins.addons-522394 san=[127.0.0.1 192.168.49.2 addons-522394 localhost minikube]
	I1202 11:31:07.084157   14602 provision.go:177] copyRemoteCerts
	I1202 11:31:07.084212   14602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:31:07.084248   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.101169   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
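The ssh client parameters logged here (127.0.0.1, port 32768, the generated id_rsa, user docker) are enough to reproduce the session by hand; a rough sketch, assuming the same machines directory, and noting that the published port changes between runs:
    # Reproduce the SSH session minikube is using; query the current port rather than hard-coding 32768.
    PORT=$(docker port addons-522394 22/tcp | head -n1 | cut -d: -f2)
    ssh -i /home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa \
        -o StrictHostKeyChecking=no -p "${PORT}" docker@127.0.0.1 -- uname -a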
	I1202 11:31:07.196572   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 11:31:07.218457   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:31:07.239828   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:31:07.261963   14602 provision.go:87] duration metric: took 509.439437ms to configureAuth
	I1202 11:31:07.262000   14602 ubuntu.go:193] setting minikube options for container-runtime
	I1202 11:31:07.262203   14602 config.go:182] Loaded profile config "addons-522394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:07.262326   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.279559   14602 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:07.279733   14602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 11:31:07.279747   14602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:31:07.490475   14602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:31:07.490504   14602 machine.go:96] duration metric: took 4.184034941s to provisionDockerMachine
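The CRIO_MINIKUBE_OPTIONS file written just above is what carries the --insecure-registry flag into the crio service; a short sketch of verifying it from inside the node, assuming the kicbase crio unit sources /etc/sysconfig/crio.minikube via an EnvironmentFile= drop-in:
    # Run inside the node, e.g. via: minikube ssh -p addons-522394
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environmentfile
    ps -o args= -C crio | tr ' ' '\n' | grep insecure-registry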
	I1202 11:31:07.490520   14602 client.go:171] duration metric: took 17.114098916s to LocalClient.Create
	I1202 11:31:07.490543   14602 start.go:167] duration metric: took 17.114178962s to libmachine.API.Create "addons-522394"
	I1202 11:31:07.490554   14602 start.go:293] postStartSetup for "addons-522394" (driver="docker")
	I1202 11:31:07.490568   14602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:31:07.490632   14602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:31:07.490684   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.507554   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:07.600753   14602 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:31:07.603609   14602 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 11:31:07.603637   14602 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1202 11:31:07.603645   14602 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1202 11:31:07.603652   14602 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1202 11:31:07.603662   14602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6540/.minikube/addons for local assets ...
	I1202 11:31:07.603715   14602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6540/.minikube/files for local assets ...
	I1202 11:31:07.603745   14602 start.go:296] duration metric: took 113.184134ms for postStartSetup
	I1202 11:31:07.603993   14602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-522394
	I1202 11:31:07.620607   14602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/config.json ...
	I1202 11:31:07.620846   14602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 11:31:07.620881   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.637027   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:07.724920   14602 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 11:31:07.729164   14602 start.go:128] duration metric: took 17.355169836s to createHost
	I1202 11:31:07.729191   14602 start.go:83] releasing machines lock for "addons-522394", held for 17.35531727s
	I1202 11:31:07.729268   14602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-522394
	I1202 11:31:07.745687   14602 ssh_runner.go:195] Run: cat /version.json
	I1202 11:31:07.745755   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.745770   14602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:31:07.745823   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.763513   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:07.763804   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:07.926280   14602 ssh_runner.go:195] Run: systemctl --version
	I1202 11:31:07.930164   14602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:31:08.064967   14602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 11:31:08.069163   14602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:31:08.087042   14602 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1202 11:31:08.087170   14602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:31:08.113184   14602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
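A quick way to see the effect of the two find/mv passes above is to list the CNI directory afterwards; a sketch, with the two file names taken from the log line above:
    # The disabled configs should now carry a .mk_disabled suffix.
    ls -l /etc/cni/net.d/
    # expected (per the log): 87-podman-bridge.conflist.mk_disabled, 100-crio-bridge.conf.mk_disabled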
	I1202 11:31:08.113205   14602 start.go:495] detecting cgroup driver to use...
	I1202 11:31:08.113235   14602 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 11:31:08.113270   14602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:31:08.126531   14602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:31:08.136665   14602 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:31:08.136710   14602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:31:08.148927   14602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:31:08.162040   14602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:31:08.246630   14602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:31:08.321564   14602 docker.go:233] disabling docker service ...
	I1202 11:31:08.321614   14602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:31:08.337907   14602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:31:08.348550   14602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:31:08.426490   14602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:31:08.506554   14602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:31:08.516948   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:31:08.531489   14602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:31:08.531545   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.540668   14602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:31:08.540725   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.549633   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.558699   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.567611   14602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:31:08.576294   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.585011   14602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.599023   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
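Taken together, the sed edits above leave a CRI-O drop-in that pins the pause image, switches to the cgroupfs manager, puts conmon in the pod cgroup, and opens unprivileged ports; a sketch of spot-checking the result on the node (expected values reconstructed from the commands, not captured from the file):
    # Run inside the node.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected: pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs",
    #           conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0"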
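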
	I1202 11:31:08.607899   14602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:31:08.615807   14602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:31:08.615875   14602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:31:08.629171   14602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:31:08.637882   14602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:08.716908   14602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:31:08.818193   14602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:31:08.818269   14602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:31:08.821499   14602 start.go:563] Will wait 60s for crictl version
	I1202 11:31:08.821549   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:31:08.824914   14602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:31:08.855819   14602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1202 11:31:08.855920   14602 ssh_runner.go:195] Run: crio --version
	I1202 11:31:08.888826   14602 ssh_runner.go:195] Run: crio --version
	I1202 11:31:08.924228   14602 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1202 11:31:08.925675   14602 cli_runner.go:164] Run: docker network inspect addons-522394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 11:31:08.942451   14602 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 11:31:08.945891   14602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:31:08.955847   14602 kubeadm.go:883] updating cluster {Name:addons-522394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-522394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:31:08.955958   14602 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:31:08.956004   14602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:31:09.022776   14602 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:31:09.022803   14602 crio.go:433] Images already preloaded, skipping extraction
	I1202 11:31:09.022851   14602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:31:09.053133   14602 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:31:09.053155   14602 cache_images.go:84] Images are preloaded, skipping loading
	I1202 11:31:09.053163   14602 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1202 11:31:09.053246   14602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-522394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-522394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:31:09.053314   14602 ssh_runner.go:195] Run: crio config
	I1202 11:31:09.095154   14602 cni.go:84] Creating CNI manager for ""
	I1202 11:31:09.095175   14602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 11:31:09.095185   14602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:31:09.095205   14602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-522394 NodeName:addons-522394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:31:09.095322   14602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-522394"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 11:31:09.095379   14602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:31:09.103556   14602 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:31:09.103620   14602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 11:31:09.111562   14602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 11:31:09.127615   14602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:31:09.143980   14602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
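At this point the kubelet drop-in, the kubelet unit and the rendered kubeadm config are all staged on the node; a sketch of inspecting and sanity-checking them before init runs, assuming the paths from the three scp lines above:
    # Run inside the node, e.g. via: minikube ssh -p addons-522394
    systemctl cat kubelet                                        # unit plus the 10-kubeadm.conf drop-in
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # the ExecStart flags logged further up
    # Walk the kubeadm config through init without mutating the node:
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run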
	I1202 11:31:09.160304   14602 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 11:31:09.163655   14602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:31:09.173716   14602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:09.245329   14602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:31:09.257689   14602 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394 for IP: 192.168.49.2
	I1202 11:31:09.257724   14602 certs.go:194] generating shared ca certs ...
	I1202 11:31:09.257739   14602 certs.go:226] acquiring lock for ca certs: {Name:mkb9f54a1a5b06ba02335d6260145758dc70e4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.257867   14602 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key
	I1202 11:31:09.469731   14602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt ...
	I1202 11:31:09.469764   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt: {Name:mk4ae91dfc26d7153230fe2d9cab66a79015108a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.469961   14602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key ...
	I1202 11:31:09.469973   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key: {Name:mkd438dac45f54961607e644fe9baf5d15ef9f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.470048   14602 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key
	I1202 11:31:09.594064   14602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt ...
	I1202 11:31:09.594101   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt: {Name:mk76b36b478ee22df66ae14e5403698cb715b005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.594292   14602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key ...
	I1202 11:31:09.594305   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key: {Name:mk26bd14e98e8bc68bd181b692e23db7a5175adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.594386   14602 certs.go:256] generating profile certs ...
	I1202 11:31:09.594447   14602 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.key
	I1202 11:31:09.594472   14602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt with IP's: []
	I1202 11:31:09.751460   14602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt ...
	I1202 11:31:09.751496   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: {Name:mk2e66010c7db27c0dace19df49014b2d0afb6c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.751672   14602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.key ...
	I1202 11:31:09.751680   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.key: {Name:mkca5e7485085a3453a54b4745cfc443fdaeaf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.751755   14602 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key.2a96479c
	I1202 11:31:09.751773   14602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt.2a96479c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 11:31:10.101761   14602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt.2a96479c ...
	I1202 11:31:10.101789   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt.2a96479c: {Name:mkcc418856c5eb273401a18c95c72fba1024ade2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:10.101937   14602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key.2a96479c ...
	I1202 11:31:10.101950   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key.2a96479c: {Name:mk534a25778866ef4232d4034b1d474493bfe2ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:10.102019   14602 certs.go:381] copying /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt.2a96479c -> /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt
	I1202 11:31:10.102094   14602 certs.go:385] copying /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key.2a96479c -> /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key
	I1202 11:31:10.102139   14602 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.key
	I1202 11:31:10.102156   14602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.crt with IP's: []
	I1202 11:31:10.432510   14602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.crt ...
	I1202 11:31:10.432548   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.crt: {Name:mkb33877c82f9fd153d1621054c6f7a99b6da53e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:10.432729   14602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.key ...
	I1202 11:31:10.432740   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.key: {Name:mk635a5916d42202e3b8acae8ce56111092d49f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:10.432911   14602 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:31:10.432946   14602 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem (1078 bytes)
	I1202 11:31:10.432971   14602 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:31:10.432999   14602 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem (1679 bytes)
	I1202 11:31:10.433625   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:31:10.455890   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:31:10.477324   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:31:10.498778   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 11:31:10.519811   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 11:31:10.540521   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 11:31:10.561394   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:31:10.582528   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:31:10.603242   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:31:10.623859   14602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 11:31:10.639406   14602 ssh_runner.go:195] Run: openssl version
	I1202 11:31:10.644391   14602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:31:10.652798   14602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:10.655792   14602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:10.655836   14602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:10.662042   14602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
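The b5213941.0 link name is simply the OpenSSL subject hash of the minikube CA with a .0 suffix, which is what the two commands above compute and create; a sketch of reproducing it (the hash value is specific to the CA generated in this run):
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    # A self-signed CA should then verify against the hashed directory:
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem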
	I1202 11:31:10.670503   14602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:31:10.673440   14602 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:31:10.673489   14602 kubeadm.go:392] StartCluster: {Name:addons-522394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-522394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:31:10.673575   14602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:31:10.673637   14602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:31:10.704726   14602 cri.go:89] found id: ""
	I1202 11:31:10.704785   14602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 11:31:10.712701   14602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 11:31:10.720604   14602 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1202 11:31:10.720651   14602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 11:31:10.728353   14602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 11:31:10.728371   14602 kubeadm.go:157] found existing configuration files:
	
	I1202 11:31:10.728405   14602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 11:31:10.735841   14602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 11:31:10.735885   14602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 11:31:10.743324   14602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 11:31:10.750796   14602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 11:31:10.750852   14602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 11:31:10.758231   14602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 11:31:10.765683   14602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 11:31:10.765733   14602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 11:31:10.772915   14602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 11:31:10.780474   14602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 11:31:10.780536   14602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 11:31:10.787828   14602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 11:31:10.840908   14602 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1202 11:31:10.892595   14602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 11:31:20.085625   14602 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 11:31:20.085707   14602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 11:31:20.085878   14602 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1202 11:31:20.085980   14602 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1202 11:31:20.086026   14602 kubeadm.go:310] OS: Linux
	I1202 11:31:20.086094   14602 kubeadm.go:310] CGROUPS_CPU: enabled
	I1202 11:31:20.086153   14602 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1202 11:31:20.086237   14602 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1202 11:31:20.086305   14602 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1202 11:31:20.086372   14602 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1202 11:31:20.086427   14602 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1202 11:31:20.086467   14602 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1202 11:31:20.086511   14602 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1202 11:31:20.086551   14602 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1202 11:31:20.086616   14602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 11:31:20.086695   14602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 11:31:20.086783   14602 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 11:31:20.086872   14602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 11:31:20.088662   14602 out.go:235]   - Generating certificates and keys ...
	I1202 11:31:20.088745   14602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 11:31:20.088805   14602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 11:31:20.088883   14602 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 11:31:20.088968   14602 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 11:31:20.089050   14602 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 11:31:20.089131   14602 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 11:31:20.089229   14602 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 11:31:20.089416   14602 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-522394 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 11:31:20.089492   14602 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 11:31:20.089625   14602 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-522394 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 11:31:20.089704   14602 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 11:31:20.089767   14602 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 11:31:20.089825   14602 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 11:31:20.089900   14602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 11:31:20.089944   14602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 11:31:20.090018   14602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 11:31:20.090128   14602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 11:31:20.090237   14602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 11:31:20.090334   14602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 11:31:20.090453   14602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 11:31:20.090512   14602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 11:31:20.092021   14602 out.go:235]   - Booting up control plane ...
	I1202 11:31:20.092123   14602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 11:31:20.092189   14602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 11:31:20.092252   14602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 11:31:20.092380   14602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 11:31:20.092461   14602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 11:31:20.092522   14602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 11:31:20.092694   14602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 11:31:20.092846   14602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 11:31:20.092911   14602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001512105s
	I1202 11:31:20.092976   14602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 11:31:20.093027   14602 kubeadm.go:310] [api-check] The API server is healthy after 4.002254275s
	I1202 11:31:20.093116   14602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 11:31:20.093240   14602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 11:31:20.093311   14602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 11:31:20.093517   14602 kubeadm.go:310] [mark-control-plane] Marking the node addons-522394 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 11:31:20.093604   14602 kubeadm.go:310] [bootstrap-token] Using token: eeeqcp.fral8wgnp9vy03i0
	I1202 11:31:20.095084   14602 out.go:235]   - Configuring RBAC rules ...
	I1202 11:31:20.095188   14602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 11:31:20.095277   14602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 11:31:20.095437   14602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 11:31:20.095553   14602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 11:31:20.095662   14602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 11:31:20.095759   14602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 11:31:20.095891   14602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 11:31:20.095958   14602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 11:31:20.096035   14602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 11:31:20.096046   14602 kubeadm.go:310] 
	I1202 11:31:20.096115   14602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 11:31:20.096123   14602 kubeadm.go:310] 
	I1202 11:31:20.096214   14602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 11:31:20.096224   14602 kubeadm.go:310] 
	I1202 11:31:20.096259   14602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 11:31:20.096359   14602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 11:31:20.096410   14602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 11:31:20.096416   14602 kubeadm.go:310] 
	I1202 11:31:20.096478   14602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 11:31:20.096488   14602 kubeadm.go:310] 
	I1202 11:31:20.096555   14602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 11:31:20.096564   14602 kubeadm.go:310] 
	I1202 11:31:20.096639   14602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 11:31:20.096748   14602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 11:31:20.096846   14602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 11:31:20.096855   14602 kubeadm.go:310] 
	I1202 11:31:20.096975   14602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 11:31:20.097087   14602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 11:31:20.097099   14602 kubeadm.go:310] 
	I1202 11:31:20.097231   14602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eeeqcp.fral8wgnp9vy03i0 \
	I1202 11:31:20.097384   14602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f7d4bd58f5eb8fb1f0363979e5ea4d5bcf2e37268538de75315f476aceafe2e5 \
	I1202 11:31:20.097415   14602 kubeadm.go:310] 	--control-plane 
	I1202 11:31:20.097421   14602 kubeadm.go:310] 
	I1202 11:31:20.097586   14602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 11:31:20.097605   14602 kubeadm.go:310] 
	I1202 11:31:20.097721   14602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eeeqcp.fral8wgnp9vy03i0 \
	I1202 11:31:20.097862   14602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f7d4bd58f5eb8fb1f0363979e5ea4d5bcf2e37268538de75315f476aceafe2e5 
	I1202 11:31:20.097876   14602 cni.go:84] Creating CNI manager for ""
	I1202 11:31:20.097894   14602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 11:31:20.100524   14602 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1202 11:31:20.101719   14602 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 11:31:20.105483   14602 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1202 11:31:20.105497   14602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 11:31:20.122736   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1202 11:31:20.310027   14602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 11:31:20.310147   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:20.310150   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-522394 minikube.k8s.io/updated_at=2024_12_02T11_31_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=addons-522394 minikube.k8s.io/primary=true
	I1202 11:31:20.404480   14602 ops.go:34] apiserver oom_adj: -16
	I1202 11:31:20.404481   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:20.904676   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:21.405592   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:21.905501   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:22.405118   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:22.905274   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:23.405210   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:23.905505   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:24.404953   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:24.904872   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:24.968651   14602 kubeadm.go:1113] duration metric: took 4.65856046s to wait for elevateKubeSystemPrivileges
	I1202 11:31:24.968693   14602 kubeadm.go:394] duration metric: took 14.295207991s to StartCluster
	I1202 11:31:24.968714   14602 settings.go:142] acquiring lock: {Name:mkd94da5b026832ad8b1eceae7944b5245757344 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:24.968820   14602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:31:24.969361   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/kubeconfig: {Name:mk5ee3d9b6afe00d14254b3bb7ff913980280999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:24.969698   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 11:31:24.970073   14602 config.go:182] Loaded profile config "addons-522394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:24.969863   14602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:31:24.970200   14602 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
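The toEnable map above is the effective addon set for this profile (ingress, metrics-server, csi-hostpath-driver, registry, gcp-auth and others on; dashboard, istio, olm and others off). A hedged way to view the same state from the CLI once the cluster is up:

	# Show addon status for the addons-522394 profile
	out/minikube-linux-amd64 -p addons-522394 addons list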
	I1202 11:31:24.970316   14602 addons.go:69] Setting yakd=true in profile "addons-522394"
	I1202 11:31:24.970365   14602 addons.go:234] Setting addon yakd=true in "addons-522394"
	I1202 11:31:24.970410   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.970928   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.971013   14602 addons.go:69] Setting inspektor-gadget=true in profile "addons-522394"
	I1202 11:31:24.971053   14602 addons.go:234] Setting addon inspektor-gadget=true in "addons-522394"
	I1202 11:31:24.971092   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.971139   14602 addons.go:69] Setting storage-provisioner=true in profile "addons-522394"
	I1202 11:31:24.971167   14602 addons.go:234] Setting addon storage-provisioner=true in "addons-522394"
	I1202 11:31:24.971206   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.971524   14602 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-522394"
	I1202 11:31:24.971543   14602 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-522394"
	I1202 11:31:24.971734   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.971803   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.971952   14602 addons.go:69] Setting volumesnapshots=true in profile "addons-522394"
	I1202 11:31:24.971979   14602 addons.go:69] Setting ingress=true in profile "addons-522394"
	I1202 11:31:24.971996   14602 addons.go:234] Setting addon volumesnapshots=true in "addons-522394"
	I1202 11:31:24.971989   14602 addons.go:69] Setting ingress-dns=true in profile "addons-522394"
	I1202 11:31:24.972015   14602 addons.go:234] Setting addon ingress=true in "addons-522394"
	I1202 11:31:24.972015   14602 addons.go:234] Setting addon ingress-dns=true in "addons-522394"
	I1202 11:31:24.972030   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.972054   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.972064   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.972091   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.972517   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.972533   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.972551   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.972837   14602 addons.go:69] Setting default-storageclass=true in profile "addons-522394"
	I1202 11:31:24.972927   14602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-522394"
	I1202 11:31:24.973022   14602 addons.go:69] Setting volcano=true in profile "addons-522394"
	I1202 11:31:24.973064   14602 addons.go:234] Setting addon volcano=true in "addons-522394"
	I1202 11:31:24.973090   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.973279   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.973518   14602 addons.go:69] Setting metrics-server=true in profile "addons-522394"
	I1202 11:31:24.973540   14602 addons.go:234] Setting addon metrics-server=true in "addons-522394"
	I1202 11:31:24.973718   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.973933   14602 addons.go:69] Setting cloud-spanner=true in profile "addons-522394"
	I1202 11:31:24.973958   14602 addons.go:234] Setting addon cloud-spanner=true in "addons-522394"
	I1202 11:31:24.974064   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.974209   14602 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-522394"
	I1202 11:31:24.974257   14602 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-522394"
	I1202 11:31:24.974288   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.974347   14602 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-522394"
	I1202 11:31:24.974390   14602 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-522394"
	I1202 11:31:24.974417   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.974562   14602 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-522394"
	I1202 11:31:24.974639   14602 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-522394"
	I1202 11:31:24.974669   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.976377   14602 out.go:177] * Verifying Kubernetes components...
	I1202 11:31:24.976434   14602 addons.go:69] Setting gcp-auth=true in profile "addons-522394"
	I1202 11:31:24.976471   14602 mustload.go:65] Loading cluster: addons-522394
	I1202 11:31:24.976503   14602 addons.go:69] Setting registry=true in profile "addons-522394"
	I1202 11:31:24.976517   14602 addons.go:234] Setting addon registry=true in "addons-522394"
	I1202 11:31:24.976554   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.976667   14602 config.go:182] Loaded profile config "addons-522394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:24.976916   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.977067   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.978174   14602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:25.007010   14602 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 11:31:25.007064   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:25.009062   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.009964   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.010195   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.010264   14602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1202 11:31:25.010770   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.011175   14602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 11:31:25.012723   14602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:25.012848   14602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:31:25.012857   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 11:31:25.012890   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
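The docker container inspect -f template used repeatedly below extracts the host port that Docker mapped to the container's 22/tcp, which is what the subsequent ssh client entries (Port:32768) are built from. Run standalone it prints only that port:

	# Print the host port mapped to the addons-522394 container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-522394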
	I1202 11:31:25.014528   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.015505   14602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:25.017689   14602 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 11:31:25.017713   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 11:31:25.017771   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.025206   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.026333   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 11:31:25.027226   14602 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-522394"
	I1202 11:31:25.027271   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:25.027639   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 11:31:25.027658   14602 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 11:31:25.027713   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.027721   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.010381   14602 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 11:31:25.028373   14602 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 11:31:25.028420   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.029870   14602 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1202 11:31:25.031003   14602 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1202 11:31:25.031019   14602 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1202 11:31:25.031078   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.058764   14602 addons.go:234] Setting addon default-storageclass=true in "addons-522394"
	I1202 11:31:25.058814   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:25.059322   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.076138   14602 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 11:31:25.077606   14602 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 11:31:25.077628   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 11:31:25.077691   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.079352   14602 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1202 11:31:25.080781   14602 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 11:31:25.080800   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1202 11:31:25.080854   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.083563   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.084364   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 11:31:25.085116   14602 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1202 11:31:25.086249   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 11:31:25.086265   14602 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 11:31:25.086283   14602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 11:31:25.086346   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.088475   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 11:31:25.088520   14602 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1202 11:31:25.090012   14602 out.go:177]   - Using image docker.io/registry:2.8.3
	I1202 11:31:25.092861   14602 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 11:31:25.092886   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 11:31:25.092939   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.094274   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 11:31:25.095262   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.096792   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	W1202 11:31:25.097369   14602 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 11:31:25.099405   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 11:31:25.104557   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 11:31:25.106712   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 11:31:25.109295   14602 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1202 11:31:25.113070   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 11:31:25.113097   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 11:31:25.113158   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.117799   14602 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 11:31:25.117825   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 11:31:25.117887   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.121355   14602 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 11:31:25.121377   14602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 11:31:25.121442   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.122272   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.124732   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.125907   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.133507   14602 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1202 11:31:25.135815   14602 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 11:31:25.136180   14602 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1202 11:31:25.136205   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 11:31:25.136278   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.144477   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.148374   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.150230   14602 out.go:177]   - Using image docker.io/busybox:stable
	I1202 11:31:25.151517   14602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 11:31:25.151538   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 11:31:25.151594   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.160602   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.160811   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.162571   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.163771   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.165712   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.167710   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.177069   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.204311   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 11:31:25.408194   14602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:31:25.522853   14602 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 11:31:25.522884   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1202 11:31:25.523058   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:31:25.701660   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 11:31:25.702245   14602 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 11:31:25.702305   14602 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 11:31:25.710690   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 11:31:25.713077   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 11:31:25.716483   14602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 11:31:25.716509   14602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 11:31:25.718076   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 11:31:25.718094   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 11:31:25.801825   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 11:31:25.802151   14602 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 11:31:25.802173   14602 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 11:31:25.803105   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 11:31:25.810173   14602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 11:31:25.810266   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 11:31:25.812388   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 11:31:25.911687   14602 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 11:31:25.911776   14602 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 11:31:25.918266   14602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 11:31:25.918292   14602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 11:31:25.918899   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 11:31:26.001725   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 11:31:26.003520   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 11:31:26.003594   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 11:31:26.018457   14602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 11:31:26.018484   14602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 11:31:26.101997   14602 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 11:31:26.102048   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 11:31:26.316019   14602 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 11:31:26.316110   14602 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 11:31:26.410849   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 11:31:26.420301   14602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 11:31:26.420384   14602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 11:31:26.423105   14602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 11:31:26.423165   14602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 11:31:26.506346   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 11:31:26.506451   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 11:31:26.511070   14602 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.306713595s)
	I1202 11:31:26.511158   14602 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
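The sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the docker network gateway (192.168.49.1). Assuming the same context, the patched Corefile with the injected hosts block can be inspected with:

	# Show the coredns Corefile after the host record injection
	kubectl --context addons-522394 -n kube-system get configmap coredns -o yaml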
	I1202 11:31:26.512423   14602 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.104198014s)
	I1202 11:31:26.513351   14602 node_ready.go:35] waiting up to 6m0s for node "addons-522394" to be "Ready" ...
	I1202 11:31:26.701685   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 11:31:26.701766   14602 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 11:31:26.716987   14602 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 11:31:26.717013   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 11:31:26.801942   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 11:31:26.801990   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 11:31:26.818566   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 11:31:27.005561   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 11:31:27.100901   14602 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:27.100935   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 11:31:27.109529   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 11:31:27.109554   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 11:31:27.306082   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:27.614206   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 11:31:27.614283   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 11:31:27.702826   14602 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-522394" context rescaled to 1 replicas
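The rescale noted above reduces coredns to a single replica for this one-node cluster. As a hedged sketch, the equivalent operation from the CLI would be:

	# Scale the coredns Deployment to one replica (roughly what the rescale above performs)
	kubectl --context addons-522394 -n kube-system scale deployment coredns --replicas=1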
	I1202 11:31:28.004344   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 11:31:28.004450   14602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 11:31:28.216668   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 11:31:28.216896   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 11:31:28.418686   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 11:31:28.418775   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 11:31:28.810816   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:28.913851   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 11:31:28.913879   14602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 11:31:29.119635   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 11:31:29.327064   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.803972764s)
	I1202 11:31:31.026963   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:31.204719   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.502962196s)
	I1202 11:31:31.204774   14602 addons.go:475] Verifying addon ingress=true in "addons-522394"
	I1202 11:31:31.204809   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.494078658s)
	I1202 11:31:31.204883   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.491780344s)
	I1202 11:31:31.204962   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.403052137s)
	I1202 11:31:31.205078   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.401900391s)
	I1202 11:31:31.205108   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.392648077s)
	I1202 11:31:31.205167   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.286247631s)
	I1202 11:31:31.205624   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.203781972s)
	I1202 11:31:31.205680   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.794733783s)
	I1202 11:31:31.205713   14602 addons.go:475] Verifying addon registry=true in "addons-522394"
	I1202 11:31:31.205971   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.200311107s)
	I1202 11:31:31.206109   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.387237663s)
	I1202 11:31:31.206138   14602 addons.go:475] Verifying addon metrics-server=true in "addons-522394"
	I1202 11:31:31.207542   14602 out.go:177] * Verifying registry addon...
	I1202 11:31:31.208077   14602 out.go:177] * Verifying ingress addon...
	I1202 11:31:31.208118   14602 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-522394 service yakd-dashboard -n yakd-dashboard
	
	I1202 11:31:31.209779   14602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 11:31:31.210924   14602 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 11:31:31.217586   14602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 11:31:31.217608   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:31.217794   14602 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 11:31:31.217815   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1202 11:31:31.225831   14602 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
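The storage-provisioner-rancher warning above failed on a concurrent-update conflict while marking local-path as the default StorageClass. The usual manual remedy (generic Kubernetes procedure, not specific to this test) is to set the default annotation explicitly once the conflict has cleared:

	# Inspect current defaults, then mark local-path as the default StorageClass
	kubectl --context addons-522394 get storageclass
	kubectl --context addons-522394 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'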
	I1202 11:31:31.716035   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:31.815740   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:32.006766   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.700624245s)
	W1202 11:31:32.006820   14602 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 11:31:32.006845   14602 retry.go:31] will retry after 356.321884ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
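The apply failure above is the common race between creating the snapshot.storage.k8s.io CRDs and the VolumeSnapshotClass object that depends on them; the retry with kubectl apply --force a few lines below succeeds once the CRDs are established. As a hedged alternative, the second apply could be gated on CRD establishment explicitly:

	# Block until the VolumeSnapshotClass CRD is established before re-applying the snapshot class
	kubectl --context addons-522394 wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s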
	I1202 11:31:32.214037   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:32.215538   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:32.234579   14602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 11:31:32.234655   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:32.255563   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:32.364190   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:32.423627   14602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 11:31:32.505853   14602 addons.go:234] Setting addon gcp-auth=true in "addons-522394"
	I1202 11:31:32.505914   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:32.506431   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:32.534628   14602 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 11:31:32.534691   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:32.554569   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:32.712615   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:32.713514   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:32.927114   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.807405531s)
	I1202 11:31:32.927161   14602 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-522394"
	I1202 11:31:32.928923   14602 out.go:177] * Verifying csi-hostpath-driver addon...
	I1202 11:31:32.931262   14602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 11:31:32.934553   14602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 11:31:32.934577   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
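The csi-hostpath-driver verification above polls pods by the kubernetes.io/minikube-addons=csi-hostpath-driver label until they leave Pending. Assuming the same context, the same set of pods can be listed manually with:

	# Pods the test is waiting on for the csi-hostpath-driver addon
	kubectl --context addons-522394 -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver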
	I1202 11:31:33.213897   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:33.214155   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:33.434406   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:33.516975   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:33.713933   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:33.714263   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:33.934730   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:34.213814   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:34.214241   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:34.434553   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:34.713375   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:34.714016   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:34.934217   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:35.177959   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.813722275s)
	I1202 11:31:35.178026   14602 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.643360252s)
	I1202 11:31:35.180020   14602 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 11:31:35.181470   14602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:35.182961   14602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 11:31:35.182974   14602 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 11:31:35.199564   14602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 11:31:35.199588   14602 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 11:31:35.213804   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:35.214534   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:35.217166   14602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 11:31:35.217183   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 11:31:35.234220   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 11:31:35.435108   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:35.555010   14602 addons.go:475] Verifying addon gcp-auth=true in "addons-522394"
	I1202 11:31:35.556578   14602 out.go:177] * Verifying gcp-auth addon...
	I1202 11:31:35.558569   14602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 11:31:35.560775   14602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 11:31:35.560791   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:35.713370   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:35.713675   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:35.934138   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:36.016509   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:36.061872   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:36.213899   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:36.214539   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:36.434716   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:36.562284   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:36.713076   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:36.714048   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:36.934604   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:37.061201   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:37.212641   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:37.213771   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:37.434435   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:37.562719   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:37.713468   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:37.713849   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:37.934235   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:38.016725   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:38.062238   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:38.213059   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:38.214075   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:38.434598   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:38.562175   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:38.713301   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:38.714179   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:38.934730   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:39.061782   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:39.213469   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:39.213896   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:39.434162   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:39.561697   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:39.713351   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:39.713769   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:39.934438   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:40.016762   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:40.062061   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:40.212600   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:40.213543   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:40.434859   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:40.561610   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:40.713367   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:40.714523   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:40.935230   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:41.061727   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:41.213169   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:41.214420   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:41.434918   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:41.561870   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:41.713508   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:41.714010   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:41.934126   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:42.061977   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:42.212554   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:42.214781   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:42.434019   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:42.516388   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:42.561802   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:42.713572   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:42.714182   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:42.934304   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:43.061995   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:43.212434   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:43.213774   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:43.434140   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:43.562644   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:43.713262   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:43.714581   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:44.008705   14602 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 11:31:44.008730   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:44.016837   14602 node_ready.go:49] node "addons-522394" has status "Ready":"True"
	I1202 11:31:44.016864   14602 node_ready.go:38] duration metric: took 17.503456307s for node "addons-522394" to be "Ready" ...
	I1202 11:31:44.016878   14602 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:31:44.041239   14602 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace to be "Ready" ...
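The system-critical label selectors listed in the log entry above can be checked by hand the same way; a minimal sketch, assuming the addons-522394 context and the 6m0s wait budget that the log reports (commands shown here are illustrative, not part of the recorded test run):

    kubectl --context addons-522394 wait --for=condition=ready --namespace=kube-system pod --selector=k8s-app=kube-dns --timeout=6m0s
    kubectl --context addons-522394 wait --for=condition=ready --namespace=kube-system pod --selector=component=kube-apiserver --timeout=6m0s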
	I1202 11:31:44.112563   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:44.215036   14602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 11:31:44.215071   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:44.215789   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:44.437136   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:44.603481   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:44.715429   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:44.716398   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:44.935571   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:45.062158   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:45.214805   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:45.217998   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:45.504050   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:45.604227   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:45.714322   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:45.715787   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:45.937142   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:46.102974   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:46.104415   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:46.214191   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:46.215578   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:46.437860   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:46.602154   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:46.716066   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:46.716762   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:46.937310   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:47.062195   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:47.213580   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:47.214485   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:47.435860   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:47.561253   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:47.714052   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:47.714745   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:47.935880   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:48.062054   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:48.214905   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:48.215271   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:48.436128   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:48.548022   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:48.561794   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:48.714418   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:48.715273   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:48.935504   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:49.061481   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:49.214218   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:49.214831   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:49.436589   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:49.562125   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:49.713156   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:49.714498   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:49.936840   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:50.108903   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:50.213179   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:50.214208   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:50.436050   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:50.561903   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:50.713863   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:50.714188   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:50.936049   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:51.046915   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:51.061681   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:51.213853   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:51.214659   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:51.436824   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:51.562283   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:51.713541   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:51.714444   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:51.936569   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:52.061743   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:52.214100   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:52.214726   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:52.436035   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:52.561723   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:52.713842   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:52.714685   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:52.935878   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:53.047351   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:53.062598   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:53.213739   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:53.214686   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:53.436694   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:53.562896   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:53.714579   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:53.714810   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:53.934992   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:54.062153   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:54.213262   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:54.214134   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:54.435461   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:54.561658   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:54.714639   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:54.714967   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:54.936168   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:55.062577   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:55.215442   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:55.215975   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:55.436361   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:55.546811   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:55.562552   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:55.713451   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:55.714338   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:56.020061   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:56.103072   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:56.213639   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:56.214768   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:56.436056   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:56.562683   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:56.713822   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:56.715072   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:56.936230   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:57.061811   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:57.213828   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:57.214563   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:57.435700   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:57.546921   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:57.562389   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:57.713112   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:57.714229   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:57.936408   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:58.062040   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:58.214018   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:58.215358   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:58.435874   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:58.563100   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:58.715202   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:58.716010   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:58.935433   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:59.061816   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:59.213773   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:59.215048   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:59.434926   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:59.546955   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:59.561122   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:59.713280   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:59.714776   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:59.936478   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:00.061346   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:00.213677   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:00.216994   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:00.435293   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:00.561341   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:00.713755   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:00.715082   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:00.935990   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:01.062489   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:01.213711   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:01.214825   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:01.435880   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:01.547735   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:01.562195   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:01.713077   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:01.714091   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:01.934981   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:02.062075   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:02.214540   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:02.214914   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:02.435274   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:02.562648   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:02.713989   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:02.814825   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:02.935935   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:03.062334   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:03.213217   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:03.214411   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:03.435526   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:03.562115   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:03.713568   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:03.714592   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:03.936143   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:04.047803   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:04.061771   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:04.214168   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:04.214771   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:04.435884   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:04.562369   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:04.713345   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:04.714259   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:04.935595   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:05.061759   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:05.213825   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:05.215396   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:05.434884   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:05.561366   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:05.713648   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:05.714685   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:05.936693   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:06.062127   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:06.213306   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:06.214041   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:06.435107   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:06.547169   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:06.561596   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:06.713775   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:06.714562   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:06.935466   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:07.102201   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:07.213181   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:07.215700   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:07.435631   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:07.561955   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:07.714231   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:07.714832   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:07.936078   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:08.047329   14602 pod_ready.go:93] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.047358   14602 pod_ready.go:82] duration metric: took 24.006089019s for pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.047373   14602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2cr8g" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.051655   14602 pod_ready.go:93] pod "coredns-7c65d6cfc9-2cr8g" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.051673   14602 pod_ready.go:82] duration metric: took 4.291677ms for pod "coredns-7c65d6cfc9-2cr8g" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.051691   14602 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.055477   14602 pod_ready.go:93] pod "etcd-addons-522394" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.055498   14602 pod_ready.go:82] duration metric: took 3.800041ms for pod "etcd-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.055511   14602 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.059136   14602 pod_ready.go:93] pod "kube-apiserver-addons-522394" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.059154   14602 pod_ready.go:82] duration metric: took 3.637196ms for pod "kube-apiserver-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.059163   14602 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.060836   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:08.063086   14602 pod_ready.go:93] pod "kube-controller-manager-addons-522394" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.063105   14602 pod_ready.go:82] duration metric: took 3.935451ms for pod "kube-controller-manager-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.063118   14602 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7vj6f" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.213091   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:08.214184   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:08.435547   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:08.445381   14602 pod_ready.go:93] pod "kube-proxy-7vj6f" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.445405   14602 pod_ready.go:82] duration metric: took 382.279224ms for pod "kube-proxy-7vj6f" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.445415   14602 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.561529   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:08.713888   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:08.714544   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:08.845735   14602 pod_ready.go:93] pod "kube-scheduler-addons-522394" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.845764   14602 pod_ready.go:82] duration metric: took 400.341951ms for pod "kube-scheduler-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.845775   14602 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.935615   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:09.062216   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:09.212816   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:09.213973   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:09.435700   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:09.562262   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:09.715359   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:09.716284   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:09.936041   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:10.103453   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:10.214031   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:10.216290   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:10.505027   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:10.603838   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:10.716891   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:10.718010   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:10.913902   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:11.005742   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:11.104418   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:11.216927   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:11.217937   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:11.507449   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:11.603957   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:11.717527   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:11.719040   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:11.935482   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:12.061591   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:12.213518   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:12.214745   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:12.435255   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:12.562713   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:12.713623   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:12.715758   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:12.936213   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:13.103343   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:13.214492   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:13.214847   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:13.351329   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:13.435799   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:13.562048   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:13.714604   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:13.715344   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:13.936029   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:14.062925   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:14.215328   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:14.215460   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:14.436073   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:14.561828   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:14.714083   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:14.714644   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:14.936639   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:15.062120   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:15.214771   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:15.215320   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:15.351955   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:15.435757   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:15.562113   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:15.715368   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:15.715413   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:15.936239   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:16.062660   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:16.213741   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:16.214542   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:16.435573   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:16.561801   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:16.713926   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:16.714962   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:16.935960   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:17.061647   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:17.213814   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:17.215153   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:17.436429   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:17.561946   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:17.713677   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:17.715137   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:17.851694   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:17.935888   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:18.062724   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:18.213647   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:18.215130   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:18.438875   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:18.561896   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:18.715047   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:18.715500   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:18.936133   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:19.103642   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:19.214302   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:19.215284   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:19.509538   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:19.601673   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:19.714257   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:19.714884   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:19.902577   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:19.936033   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:20.062136   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:20.213710   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:20.215050   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:20.435781   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:20.561873   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:20.714139   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:20.714448   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:20.936352   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:21.062456   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:21.259425   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:21.259928   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:21.437320   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:21.562374   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:21.713729   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:21.714297   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:21.935844   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:22.063340   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:22.213397   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:22.214638   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:22.352099   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:22.435788   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:22.562089   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:22.714097   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:22.714555   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:22.936673   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:23.062499   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:23.213814   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:23.214508   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:23.436650   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:23.562251   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:23.713346   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:23.714315   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:23.935858   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:24.062606   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:24.213670   14602 kapi.go:107] duration metric: took 53.003885246s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 11:32:24.214558   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:24.435190   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:24.562531   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:24.715304   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:24.907605   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:25.005506   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:25.103671   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:25.215923   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:25.436660   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:25.562473   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:25.714853   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:25.935830   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:26.061963   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:26.215469   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:26.436104   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:26.561879   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:26.715579   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:26.935666   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:27.062520   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:27.214609   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:27.350661   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:27.435745   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:27.562302   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:27.714158   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:27.935962   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:28.101748   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:28.215334   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:28.435069   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:28.562392   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:28.714806   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:28.936078   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:29.103502   14602 kapi.go:107] duration metric: took 53.544924052s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 11:32:29.105287   14602 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-522394 cluster.
	I1202 11:32:29.106847   14602 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 11:32:29.109263   14602 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1202 11:32:29.215008   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:29.351421   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:29.436468   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:29.715615   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:29.935807   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:30.214922   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:30.436481   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:30.715287   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:31.006638   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:31.215631   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:31.406529   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:31.505951   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:31.719179   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:31.937642   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:32.215434   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:32.436006   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:32.715459   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:32.935340   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:33.215589   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:33.435771   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:33.715028   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:33.852153   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:33.935695   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:34.216586   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:34.435895   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:34.714446   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:34.935909   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:35.214936   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:35.436161   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:35.715186   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:35.902746   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:35.936938   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:36.216572   14602 kapi.go:107] duration metric: took 1m5.005641999s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 11:32:36.436261   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:36.935920   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:37.435329   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:37.935481   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:38.351060   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:38.436084   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:38.936743   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:39.436403   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:39.935874   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:40.351454   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:40.437019   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:40.935677   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:41.439488   14602 kapi.go:107] duration metric: took 1m8.508222368s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 11:32:41.441284   14602 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, amd-gpu-device-plugin, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1202 11:32:41.443272   14602 addons.go:510] duration metric: took 1m16.473059725s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns amd-gpu-device-plugin inspektor-gadget nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1202 11:32:42.351544   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:44.851785   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:47.351839   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:49.850755   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:51.851399   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:54.350774   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:56.351373   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:58.351619   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:33:00.851493   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:33:02.851739   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:33:04.352033   14602 pod_ready.go:93] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"True"
	I1202 11:33:04.352062   14602 pod_ready.go:82] duration metric: took 55.506278545s for pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace to be "Ready" ...
	I1202 11:33:04.352074   14602 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kwcbg" in "kube-system" namespace to be "Ready" ...
	I1202 11:33:04.356463   14602 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-kwcbg" in "kube-system" namespace has status "Ready":"True"
	I1202 11:33:04.356487   14602 pod_ready.go:82] duration metric: took 4.405567ms for pod "nvidia-device-plugin-daemonset-kwcbg" in "kube-system" namespace to be "Ready" ...
	I1202 11:33:04.356512   14602 pod_ready.go:39] duration metric: took 1m20.339620891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:33:04.356534   14602 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:33:04.356573   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 11:33:04.356629   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 11:33:04.390119   14602 cri.go:89] found id: "def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:04.390141   14602 cri.go:89] found id: ""
	I1202 11:33:04.390151   14602 logs.go:282] 1 containers: [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc]
	I1202 11:33:04.390207   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.393410   14602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 11:33:04.393472   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 11:33:04.427097   14602 cri.go:89] found id: "ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:04.427126   14602 cri.go:89] found id: ""
	I1202 11:33:04.427136   14602 logs.go:282] 1 containers: [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5]
	I1202 11:33:04.427182   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.430466   14602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 11:33:04.430528   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 11:33:04.462926   14602 cri.go:89] found id: "9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:04.462951   14602 cri.go:89] found id: ""
	I1202 11:33:04.462959   14602 logs.go:282] 1 containers: [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad]
	I1202 11:33:04.462997   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.466248   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 11:33:04.466300   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 11:33:04.499482   14602 cri.go:89] found id: "0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:04.499503   14602 cri.go:89] found id: ""
	I1202 11:33:04.499514   14602 logs.go:282] 1 containers: [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0]
	I1202 11:33:04.499570   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.503014   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 11:33:04.503087   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 11:33:04.535453   14602 cri.go:89] found id: "407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:04.535476   14602 cri.go:89] found id: ""
	I1202 11:33:04.535483   14602 logs.go:282] 1 containers: [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402]
	I1202 11:33:04.535521   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.538678   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 11:33:04.538730   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 11:33:04.571654   14602 cri.go:89] found id: "a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:04.571680   14602 cri.go:89] found id: ""
	I1202 11:33:04.571688   14602 logs.go:282] 1 containers: [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611]
	I1202 11:33:04.571728   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.575208   14602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 11:33:04.575275   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 11:33:04.608516   14602 cri.go:89] found id: "4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:04.608541   14602 cri.go:89] found id: ""
	I1202 11:33:04.608548   14602 logs.go:282] 1 containers: [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c]
	I1202 11:33:04.608598   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.611910   14602 logs.go:123] Gathering logs for describe nodes ...
	I1202 11:33:04.611937   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 11:33:04.707312   14602 logs.go:123] Gathering logs for coredns [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad] ...
	I1202 11:33:04.707343   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:04.741712   14602 logs.go:123] Gathering logs for kube-proxy [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402] ...
	I1202 11:33:04.741741   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:04.773702   14602 logs.go:123] Gathering logs for kube-controller-manager [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611] ...
	I1202 11:33:04.773727   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:04.834157   14602 logs.go:123] Gathering logs for kindnet [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c] ...
	I1202 11:33:04.834193   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:04.868591   14602 logs.go:123] Gathering logs for kubelet ...
	I1202 11:33:04.868619   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 11:33:04.957351   14602 logs.go:123] Gathering logs for dmesg ...
	I1202 11:33:04.957398   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 11:33:04.969803   14602 logs.go:123] Gathering logs for kube-scheduler [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0] ...
	I1202 11:33:04.969836   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:05.009507   14602 logs.go:123] Gathering logs for CRI-O ...
	I1202 11:33:05.009545   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 11:33:05.082024   14602 logs.go:123] Gathering logs for container status ...
	I1202 11:33:05.082059   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 11:33:05.122143   14602 logs.go:123] Gathering logs for kube-apiserver [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc] ...
	I1202 11:33:05.122169   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:05.164322   14602 logs.go:123] Gathering logs for etcd [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5] ...
	I1202 11:33:05.164353   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:07.714921   14602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:33:07.729411   14602 api_server.go:72] duration metric: took 1m42.759234451s to wait for apiserver process to appear ...
	I1202 11:33:07.729435   14602 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:33:07.729476   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 11:33:07.729527   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 11:33:07.762364   14602 cri.go:89] found id: "def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:07.762384   14602 cri.go:89] found id: ""
	I1202 11:33:07.762394   14602 logs.go:282] 1 containers: [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc]
	I1202 11:33:07.762460   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.765807   14602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 11:33:07.765867   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 11:33:07.798732   14602 cri.go:89] found id: "ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:07.798754   14602 cri.go:89] found id: ""
	I1202 11:33:07.798762   14602 logs.go:282] 1 containers: [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5]
	I1202 11:33:07.798814   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.802528   14602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 11:33:07.802597   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 11:33:07.835364   14602 cri.go:89] found id: "9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:07.835382   14602 cri.go:89] found id: ""
	I1202 11:33:07.835390   14602 logs.go:282] 1 containers: [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad]
	I1202 11:33:07.835443   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.838655   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 11:33:07.838718   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 11:33:07.871286   14602 cri.go:89] found id: "0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:07.871306   14602 cri.go:89] found id: ""
	I1202 11:33:07.871314   14602 logs.go:282] 1 containers: [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0]
	I1202 11:33:07.871359   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.874700   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 11:33:07.874760   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 11:33:07.908903   14602 cri.go:89] found id: "407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:07.908930   14602 cri.go:89] found id: ""
	I1202 11:33:07.908940   14602 logs.go:282] 1 containers: [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402]
	I1202 11:33:07.908982   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.912406   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 11:33:07.912470   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 11:33:07.945015   14602 cri.go:89] found id: "a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:07.945034   14602 cri.go:89] found id: ""
	I1202 11:33:07.945042   14602 logs.go:282] 1 containers: [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611]
	I1202 11:33:07.945094   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.948378   14602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 11:33:07.948433   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 11:33:07.981128   14602 cri.go:89] found id: "4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:07.981153   14602 cri.go:89] found id: ""
	I1202 11:33:07.981161   14602 logs.go:282] 1 containers: [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c]
	I1202 11:33:07.981206   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.984527   14602 logs.go:123] Gathering logs for kube-scheduler [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0] ...
	I1202 11:33:07.984552   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:08.023077   14602 logs.go:123] Gathering logs for kube-proxy [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402] ...
	I1202 11:33:08.023111   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:08.055977   14602 logs.go:123] Gathering logs for kindnet [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c] ...
	I1202 11:33:08.056003   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:08.088171   14602 logs.go:123] Gathering logs for CRI-O ...
	I1202 11:33:08.088194   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 11:33:08.165244   14602 logs.go:123] Gathering logs for etcd [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5] ...
	I1202 11:33:08.165279   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:08.215981   14602 logs.go:123] Gathering logs for coredns [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad] ...
	I1202 11:33:08.216014   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:08.250986   14602 logs.go:123] Gathering logs for describe nodes ...
	I1202 11:33:08.251018   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 11:33:08.348309   14602 logs.go:123] Gathering logs for kube-apiserver [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc] ...
	I1202 11:33:08.348340   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:08.392047   14602 logs.go:123] Gathering logs for kube-controller-manager [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611] ...
	I1202 11:33:08.392080   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:08.447661   14602 logs.go:123] Gathering logs for container status ...
	I1202 11:33:08.447697   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 11:33:08.488878   14602 logs.go:123] Gathering logs for kubelet ...
	I1202 11:33:08.488907   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 11:33:08.570123   14602 logs.go:123] Gathering logs for dmesg ...
	I1202 11:33:08.570159   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 11:33:11.083340   14602 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:33:11.087097   14602 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 11:33:11.088008   14602 api_server.go:141] control plane version: v1.31.2
	I1202 11:33:11.088030   14602 api_server.go:131] duration metric: took 3.358589227s to wait for apiserver health ...
	I1202 11:33:11.088039   14602 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:33:11.088059   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 11:33:11.088112   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 11:33:11.122112   14602 cri.go:89] found id: "def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:11.122131   14602 cri.go:89] found id: ""
	I1202 11:33:11.122139   14602 logs.go:282] 1 containers: [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc]
	I1202 11:33:11.122178   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.125452   14602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 11:33:11.125501   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 11:33:11.158543   14602 cri.go:89] found id: "ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:11.158565   14602 cri.go:89] found id: ""
	I1202 11:33:11.158573   14602 logs.go:282] 1 containers: [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5]
	I1202 11:33:11.158616   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.161945   14602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 11:33:11.161995   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 11:33:11.194572   14602 cri.go:89] found id: "9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:11.194598   14602 cri.go:89] found id: ""
	I1202 11:33:11.194607   14602 logs.go:282] 1 containers: [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad]
	I1202 11:33:11.194652   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.198084   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 11:33:11.198135   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 11:33:11.231901   14602 cri.go:89] found id: "0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:11.231924   14602 cri.go:89] found id: ""
	I1202 11:33:11.231931   14602 logs.go:282] 1 containers: [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0]
	I1202 11:33:11.231972   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.235216   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 11:33:11.235266   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 11:33:11.268737   14602 cri.go:89] found id: "407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:11.268757   14602 cri.go:89] found id: ""
	I1202 11:33:11.268765   14602 logs.go:282] 1 containers: [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402]
	I1202 11:33:11.268805   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.272029   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 11:33:11.272099   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 11:33:11.304430   14602 cri.go:89] found id: "a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:11.304458   14602 cri.go:89] found id: ""
	I1202 11:33:11.304469   14602 logs.go:282] 1 containers: [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611]
	I1202 11:33:11.304512   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.307791   14602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 11:33:11.307845   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 11:33:11.341205   14602 cri.go:89] found id: "4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:11.341246   14602 cri.go:89] found id: ""
	I1202 11:33:11.341257   14602 logs.go:282] 1 containers: [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c]
	I1202 11:33:11.341315   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.344900   14602 logs.go:123] Gathering logs for dmesg ...
	I1202 11:33:11.344931   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 11:33:11.356614   14602 logs.go:123] Gathering logs for describe nodes ...
	I1202 11:33:11.356637   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 11:33:11.544336   14602 logs.go:123] Gathering logs for kube-scheduler [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0] ...
	I1202 11:33:11.544362   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:11.636199   14602 logs.go:123] Gathering logs for kube-proxy [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402] ...
	I1202 11:33:11.636234   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:11.669482   14602 logs.go:123] Gathering logs for kube-controller-manager [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611] ...
	I1202 11:33:11.669513   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:11.724795   14602 logs.go:123] Gathering logs for container status ...
	I1202 11:33:11.724827   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 11:33:11.766225   14602 logs.go:123] Gathering logs for kubelet ...
	I1202 11:33:11.766255   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 11:33:11.853567   14602 logs.go:123] Gathering logs for kube-apiserver [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc] ...
	I1202 11:33:11.853609   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:11.897592   14602 logs.go:123] Gathering logs for etcd [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5] ...
	I1202 11:33:11.897630   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:11.946691   14602 logs.go:123] Gathering logs for coredns [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad] ...
	I1202 11:33:11.946726   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:11.981385   14602 logs.go:123] Gathering logs for kindnet [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c] ...
	I1202 11:33:11.981416   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:12.014723   14602 logs.go:123] Gathering logs for CRI-O ...
	I1202 11:33:12.014753   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 11:33:14.594602   14602 system_pods.go:59] 19 kube-system pods found
	I1202 11:33:14.594647   14602 system_pods.go:61] "amd-gpu-device-plugin-czks8" [28b7071f-be42-4af7-bcb6-44dcf77d9d72] Running
	I1202 11:33:14.594658   14602 system_pods.go:61] "coredns-7c65d6cfc9-2cr8g" [21278506-daa2-47ba-87a6-bd0a841d3f2f] Running
	I1202 11:33:14.594663   14602 system_pods.go:61] "csi-hostpath-attacher-0" [4c389e45-1a9d-4eee-90be-e9fac8b383e0] Running
	I1202 11:33:14.594668   14602 system_pods.go:61] "csi-hostpath-resizer-0" [5a26dca9-12e6-468f-9bb4-3e1ab16070e6] Running
	I1202 11:33:14.594673   14602 system_pods.go:61] "csi-hostpathplugin-cwsfz" [38d189a6-30cc-4de4-9554-b7b17ccabac5] Running
	I1202 11:33:14.594679   14602 system_pods.go:61] "etcd-addons-522394" [5900a6e2-e94e-45e3-8761-57ae9adb4852] Running
	I1202 11:33:14.594685   14602 system_pods.go:61] "kindnet-p2kn5" [f01c6cb1-1b80-489f-8f17-8cbd5b23bbad] Running
	I1202 11:33:14.594692   14602 system_pods.go:61] "kube-apiserver-addons-522394" [567d2d63-09b9-47d3-b623-c0841253d8a2] Running
	I1202 11:33:14.594697   14602 system_pods.go:61] "kube-controller-manager-addons-522394" [346cc3c6-56e4-41cc-bdaf-b83bc67642fa] Running
	I1202 11:33:14.594703   14602 system_pods.go:61] "kube-ingress-dns-minikube" [3438f8b3-3a02-44ca-af0e-0ae8f347d465] Running
	I1202 11:33:14.594711   14602 system_pods.go:61] "kube-proxy-7vj6f" [31c251d6-04a9-4ccc-858e-f070357e572a] Running
	I1202 11:33:14.594717   14602 system_pods.go:61] "kube-scheduler-addons-522394" [3c312aab-7760-497e-a3f3-6e527a60576f] Running
	I1202 11:33:14.594723   14602 system_pods.go:61] "metrics-server-84c5f94fbc-cmfs5" [d201f129-cdd9-474b-90ff-b22982035951] Running
	I1202 11:33:14.594730   14602 system_pods.go:61] "nvidia-device-plugin-daemonset-kwcbg" [e45feff4-5960-425e-9363-207b937d3696] Running
	I1202 11:33:14.594739   14602 system_pods.go:61] "registry-66c9cd494c-vdszr" [2c730b2c-d2ab-48fe-8268-0064ccf42ac1] Running
	I1202 11:33:14.594745   14602 system_pods.go:61] "registry-proxy-9xwj9" [9c2a618e-304b-4aef-b3a1-3daca132483a] Running
	I1202 11:33:14.594752   14602 system_pods.go:61] "snapshot-controller-56fcc65765-c8r8s" [f73951d4-ec85-4a1d-abac-ba3b7a4431e5] Running
	I1202 11:33:14.594758   14602 system_pods.go:61] "snapshot-controller-56fcc65765-dxlg6" [f401a09e-b82e-4309-afdf-e1f62db25a08] Running
	I1202 11:33:14.594767   14602 system_pods.go:61] "storage-provisioner" [98cd8826-798c-4d91-8c3f-77c5470e5fad] Running
	I1202 11:33:14.594775   14602 system_pods.go:74] duration metric: took 3.50672999s to wait for pod list to return data ...
	I1202 11:33:14.594788   14602 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:33:14.597006   14602 default_sa.go:45] found service account: "default"
	I1202 11:33:14.597024   14602 default_sa.go:55] duration metric: took 2.229729ms for default service account to be created ...
	I1202 11:33:14.597032   14602 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:33:14.605392   14602 system_pods.go:86] 19 kube-system pods found
	I1202 11:33:14.605418   14602 system_pods.go:89] "amd-gpu-device-plugin-czks8" [28b7071f-be42-4af7-bcb6-44dcf77d9d72] Running
	I1202 11:33:14.605426   14602 system_pods.go:89] "coredns-7c65d6cfc9-2cr8g" [21278506-daa2-47ba-87a6-bd0a841d3f2f] Running
	I1202 11:33:14.605430   14602 system_pods.go:89] "csi-hostpath-attacher-0" [4c389e45-1a9d-4eee-90be-e9fac8b383e0] Running
	I1202 11:33:14.605434   14602 system_pods.go:89] "csi-hostpath-resizer-0" [5a26dca9-12e6-468f-9bb4-3e1ab16070e6] Running
	I1202 11:33:14.605439   14602 system_pods.go:89] "csi-hostpathplugin-cwsfz" [38d189a6-30cc-4de4-9554-b7b17ccabac5] Running
	I1202 11:33:14.605443   14602 system_pods.go:89] "etcd-addons-522394" [5900a6e2-e94e-45e3-8761-57ae9adb4852] Running
	I1202 11:33:14.605447   14602 system_pods.go:89] "kindnet-p2kn5" [f01c6cb1-1b80-489f-8f17-8cbd5b23bbad] Running
	I1202 11:33:14.605451   14602 system_pods.go:89] "kube-apiserver-addons-522394" [567d2d63-09b9-47d3-b623-c0841253d8a2] Running
	I1202 11:33:14.605455   14602 system_pods.go:89] "kube-controller-manager-addons-522394" [346cc3c6-56e4-41cc-bdaf-b83bc67642fa] Running
	I1202 11:33:14.605459   14602 system_pods.go:89] "kube-ingress-dns-minikube" [3438f8b3-3a02-44ca-af0e-0ae8f347d465] Running
	I1202 11:33:14.605466   14602 system_pods.go:89] "kube-proxy-7vj6f" [31c251d6-04a9-4ccc-858e-f070357e572a] Running
	I1202 11:33:14.605469   14602 system_pods.go:89] "kube-scheduler-addons-522394" [3c312aab-7760-497e-a3f3-6e527a60576f] Running
	I1202 11:33:14.605476   14602 system_pods.go:89] "metrics-server-84c5f94fbc-cmfs5" [d201f129-cdd9-474b-90ff-b22982035951] Running
	I1202 11:33:14.605481   14602 system_pods.go:89] "nvidia-device-plugin-daemonset-kwcbg" [e45feff4-5960-425e-9363-207b937d3696] Running
	I1202 11:33:14.605487   14602 system_pods.go:89] "registry-66c9cd494c-vdszr" [2c730b2c-d2ab-48fe-8268-0064ccf42ac1] Running
	I1202 11:33:14.605491   14602 system_pods.go:89] "registry-proxy-9xwj9" [9c2a618e-304b-4aef-b3a1-3daca132483a] Running
	I1202 11:33:14.605494   14602 system_pods.go:89] "snapshot-controller-56fcc65765-c8r8s" [f73951d4-ec85-4a1d-abac-ba3b7a4431e5] Running
	I1202 11:33:14.605497   14602 system_pods.go:89] "snapshot-controller-56fcc65765-dxlg6" [f401a09e-b82e-4309-afdf-e1f62db25a08] Running
	I1202 11:33:14.605500   14602 system_pods.go:89] "storage-provisioner" [98cd8826-798c-4d91-8c3f-77c5470e5fad] Running
	I1202 11:33:14.605509   14602 system_pods.go:126] duration metric: took 8.472356ms to wait for k8s-apps to be running ...
	I1202 11:33:14.605518   14602 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:33:14.605557   14602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:33:14.616788   14602 system_svc.go:56] duration metric: took 11.262405ms WaitForService to wait for kubelet
	I1202 11:33:14.616812   14602 kubeadm.go:582] duration metric: took 1m49.646640687s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:33:14.616832   14602 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:33:14.619796   14602 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 11:33:14.619821   14602 node_conditions.go:123] node cpu capacity is 8
	I1202 11:33:14.619836   14602 node_conditions.go:105] duration metric: took 2.99908ms to run NodePressure ...
	I1202 11:33:14.619850   14602 start.go:241] waiting for startup goroutines ...
	I1202 11:33:14.619859   14602 start.go:246] waiting for cluster config update ...
	I1202 11:33:14.619880   14602 start.go:255] writing updated cluster config ...
	I1202 11:33:14.620149   14602 ssh_runner.go:195] Run: rm -f paused
	I1202 11:33:14.667932   14602 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 11:33:14.670094   14602 out.go:177] * Done! kubectl is now configured to use "addons-522394" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.607377104Z" level=info msg="Removed pod sandbox: 60e27d37fb3e51792185e63a2b082b1ea585a5411b519f35a8b0febb9838c689" id=662418a3-3d71-49b9-b610-31f03778a6cc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.607810349Z" level=info msg="Stopping pod sandbox: 600ecb1700f514077a261ce000d803298d127f1aa088114fa6bf8cb8ed8ecca4" id=8f412547-3300-408e-a4a3-a78d31720ab3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.607841112Z" level=info msg="Stopped pod sandbox (already stopped): 600ecb1700f514077a261ce000d803298d127f1aa088114fa6bf8cb8ed8ecca4" id=8f412547-3300-408e-a4a3-a78d31720ab3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.608101903Z" level=info msg="Removing pod sandbox: 600ecb1700f514077a261ce000d803298d127f1aa088114fa6bf8cb8ed8ecca4" id=ae6f649f-49b3-4896-baf7-3e3de742cffd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.614111708Z" level=info msg="Removed pod sandbox: 600ecb1700f514077a261ce000d803298d127f1aa088114fa6bf8cb8ed8ecca4" id=ae6f649f-49b3-4896-baf7-3e3de742cffd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.614497406Z" level=info msg="Stopping pod sandbox: c6e59aafade5ad85b1a33249076076ad895d1100cc12a8669eb8add17aabf2be" id=f7f07b1f-ff70-41fb-bd67-4c6c2ab86074 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.614528655Z" level=info msg="Stopped pod sandbox (already stopped): c6e59aafade5ad85b1a33249076076ad895d1100cc12a8669eb8add17aabf2be" id=f7f07b1f-ff70-41fb-bd67-4c6c2ab86074 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.614786318Z" level=info msg="Removing pod sandbox: c6e59aafade5ad85b1a33249076076ad895d1100cc12a8669eb8add17aabf2be" id=0acd4284-f94e-4f3a-84d7-d68bf9c037fa name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.620674162Z" level=info msg="Removed pod sandbox: c6e59aafade5ad85b1a33249076076ad895d1100cc12a8669eb8add17aabf2be" id=0acd4284-f94e-4f3a-84d7-d68bf9c037fa name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.621104652Z" level=info msg="Stopping pod sandbox: 29286e8d031da9c06fd19c78152376e39d93af819b3620357087d4b2e3e92b1a" id=7aab885a-73cd-423d-b8b6-31a1bedda0d5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.621141710Z" level=info msg="Stopped pod sandbox (already stopped): 29286e8d031da9c06fd19c78152376e39d93af819b3620357087d4b2e3e92b1a" id=7aab885a-73cd-423d-b8b6-31a1bedda0d5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.621459403Z" level=info msg="Removing pod sandbox: 29286e8d031da9c06fd19c78152376e39d93af819b3620357087d4b2e3e92b1a" id=ae8563f6-2282-46e9-8fdd-bb8bfbf88083 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:35:19 addons-522394 crio[1032]: time="2024-12-02 11:35:19.627036247Z" level=info msg="Removed pod sandbox: 29286e8d031da9c06fd19c78152376e39d93af819b3620357087d4b2e3e92b1a" id=ae8563f6-2282-46e9-8fdd-bb8bfbf88083 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.279813115Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-src62/POD" id=e617b3d1-3009-4c3c-92f6-fa20c515d3b4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.279899090Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.316020958Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-src62 Namespace:default ID:c5c0ca3092f2cf9f55187a3e9eb9f0a6325adf80275a99949839274491dbaf3a UID:f222806c-0732-46a8-af48-5e5cf69caf34 NetNS:/var/run/netns/2ed257fc-ecd4-4df2-9552-73626b6bebfe Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.316065564Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-src62 to CNI network \"kindnet\" (type=ptp)"
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.325407506Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-src62 Namespace:default ID:c5c0ca3092f2cf9f55187a3e9eb9f0a6325adf80275a99949839274491dbaf3a UID:f222806c-0732-46a8-af48-5e5cf69caf34 NetNS:/var/run/netns/2ed257fc-ecd4-4df2-9552-73626b6bebfe Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.325532791Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-src62 for CNI network kindnet (type=ptp)"
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.327638914Z" level=info msg="Ran pod sandbox c5c0ca3092f2cf9f55187a3e9eb9f0a6325adf80275a99949839274491dbaf3a with infra container: default/hello-world-app-55bf9c44b4-src62/POD" id=e617b3d1-3009-4c3c-92f6-fa20c515d3b4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.328766548Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=473e510b-dae3-406d-8d92-7f6554e43e6a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.328959168Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=473e510b-dae3-406d-8d92-7f6554e43e6a name=/runtime.v1.ImageService/ImageStatus
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.329426387Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=bdcb1a92-196b-4dc4-946e-4ffe3a39e11b name=/runtime.v1.ImageService/PullImage
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.350867612Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 02 11:36:03 addons-522394 crio[1032]: time="2024-12-02 11:36:03.808917110Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56aa088d1862e       docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303                              2 minutes ago       Running             nginx                     0                   6b1691e2f24d1       nginx
	2db6ab878cea3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   62a465a3bfe95       busybox
	8a910d74d36d8       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   1e672cb075bcc       ingress-nginx-controller-5f85ff4588-zn75n
	d00d1bfa9ce35       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     2                   ea2d07f4609af       ingress-nginx-admission-patch-j7fb2
	963255c6f0cb5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   eab7ce2752a81       ingress-nginx-admission-create-jrfdn
	b7cbf62e719cd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   c174ae3606242       local-path-provisioner-86d989889c-qf4zs
	7507379c3f1a3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   04d283bcf6fe3       metrics-server-84c5f94fbc-cmfs5
	7f8bee8d09b38       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   a5ed946ce277b       kube-ingress-dns-minikube
	2cf66f4197da9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   75d2768cabdaf       storage-provisioner
	9bbf1ed828b35       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   d45379ea69265       coredns-7c65d6cfc9-2cr8g
	4702bd641c5b0       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                           4 minutes ago       Running             kindnet-cni               0                   3ed71d144052d       kindnet-p2kn5
	407fff8704469       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago       Running             kube-proxy                0                   1180b8f94fe78       kube-proxy-7vj6f
	a5b86f11cb862       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             4 minutes ago       Running             kube-controller-manager   0                   debccfba29529       kube-controller-manager-addons-522394
	def246b91f2b5       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             4 minutes ago       Running             kube-apiserver            0                   d29bab926b161       kube-apiserver-addons-522394
	ee5fef32ba1e2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago       Running             etcd                      0                   4417b3c3533a8       etcd-addons-522394
	0251b2cec71bb       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             4 minutes ago       Running             kube-scheduler            0                   72b2dd0085cf8       kube-scheduler-addons-522394
	
	
	==> coredns [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad] <==
	[INFO] 10.244.0.18:41632 - 47853 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077372s
	[INFO] 10.244.0.18:59010 - 61405 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00467787s
	[INFO] 10.244.0.18:59010 - 61607 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.004761186s
	[INFO] 10.244.0.18:46389 - 43235 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005873928s
	[INFO] 10.244.0.18:46389 - 42950 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005882501s
	[INFO] 10.244.0.18:58749 - 11291 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006203384s
	[INFO] 10.244.0.18:58749 - 11049 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.008369217s
	[INFO] 10.244.0.18:54308 - 51874 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104157s
	[INFO] 10.244.0.18:54308 - 51616 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015106s
	[INFO] 10.244.0.21:37309 - 26925 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000230298s
	[INFO] 10.244.0.21:49873 - 21442 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000266275s
	[INFO] 10.244.0.21:40285 - 56317 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000170621s
	[INFO] 10.244.0.21:57343 - 52384 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000216287s
	[INFO] 10.244.0.21:51034 - 46076 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117229s
	[INFO] 10.244.0.21:56952 - 4260 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000184998s
	[INFO] 10.244.0.21:57083 - 36348 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005622559s
	[INFO] 10.244.0.21:49451 - 19495 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005999237s
	[INFO] 10.244.0.21:59425 - 43108 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005929571s
	[INFO] 10.244.0.21:36727 - 60126 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007400343s
	[INFO] 10.244.0.21:35141 - 52336 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006022685s
	[INFO] 10.244.0.21:56188 - 47000 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006569472s
	[INFO] 10.244.0.21:57950 - 56899 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000966781s
	[INFO] 10.244.0.21:34088 - 22724 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000955926s
	[INFO] 10.244.0.25:52258 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000179848s
	[INFO] 10.244.0.25:54689 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128614s
	
	
	==> describe nodes <==
	Name:               addons-522394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-522394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=addons-522394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T11_31_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-522394
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:31:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-522394
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:36:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:34:54 +0000   Mon, 02 Dec 2024 11:31:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:34:54 +0000   Mon, 02 Dec 2024 11:31:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:34:54 +0000   Mon, 02 Dec 2024 11:31:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:34:54 +0000   Mon, 02 Dec 2024 11:31:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-522394
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 b86fc5ec93fa45b6a66a36116aa0e647
	  System UUID:                c8f73ecb-2c1a-45b7-87c3-de079fb5e436
	  Boot ID:                    2a9b6797-354b-47aa-b86d-31dcdc265ca8
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  default                     hello-world-app-55bf9c44b4-src62             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-zn75n    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m34s
	  kube-system                 coredns-7c65d6cfc9-2cr8g                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m39s
	  kube-system                 etcd-addons-522394                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m45s
	  kube-system                 kindnet-p2kn5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m40s
	  kube-system                 kube-apiserver-addons-522394                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-522394        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-7vj6f                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-addons-522394                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 metrics-server-84c5f94fbc-cmfs5              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  local-path-storage          local-path-provisioner-86d989889c-qf4zs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m35s  kube-proxy       
	  Normal   Starting                 4m45s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m45s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m45s  kubelet          Node addons-522394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m45s  kubelet          Node addons-522394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m45s  kubelet          Node addons-522394 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m41s  node-controller  Node addons-522394 event: Registered Node addons-522394 in Controller
	  Normal   NodeReady                4m21s  kubelet          Node addons-522394 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000801] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000892] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.642890] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024824] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.032587] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.029394] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.155032] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 2 11:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[  +1.007914] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[  +2.015805] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[  +4.127504] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[Dec 2 11:34] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[ +16.122279] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[ +32.764471] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	
	
	==> etcd [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5] <==
	{"level":"warn","ts":"2024-12-02T11:31:28.806544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.757322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-522394\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-12-02T11:31:28.806843Z","caller":"traceutil/trace.go:171","msg":"trace[2056605895] range","detail":"{range_begin:/registry/minions/addons-522394; range_end:; response_count:1; response_revision:424; }","duration":"187.058684ms","start":"2024-12-02T11:31:28.619775Z","end":"2024-12-02T11:31:28.806833Z","steps":["trace[2056605895] 'agreement among raft nodes before linearized reading'  (duration: 186.730267ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:31:28.806574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.234825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-02T11:31:28.807092Z","caller":"traceutil/trace.go:171","msg":"trace[1199302634] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:424; }","duration":"187.751601ms","start":"2024-12-02T11:31:28.619328Z","end":"2024-12-02T11:31:28.807080Z","steps":["trace[1199302634] 'agreement among raft nodes before linearized reading'  (duration: 186.682713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:31:28.807517Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.612468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-12-02T11:31:28.807599Z","caller":"traceutil/trace.go:171","msg":"trace[381932111] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:424; }","duration":"186.69389ms","start":"2024-12-02T11:31:28.620893Z","end":"2024-12-02T11:31:28.807587Z","steps":["trace[381932111] 'agreement among raft nodes before linearized reading'  (duration: 186.56975ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:31:28.808133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.452086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-02T11:31:28.808473Z","caller":"traceutil/trace.go:171","msg":"trace[296080295] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"105.553912ms","start":"2024-12-02T11:31:28.702903Z","end":"2024-12-02T11:31:28.808457Z","steps":["trace[296080295] 'process raft request'  (duration: 105.014251ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:28.808598Z","caller":"traceutil/trace.go:171","msg":"trace[681393968] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"101.418513ms","start":"2024-12-02T11:31:28.707173Z","end":"2024-12-02T11:31:28.808591Z","steps":["trace[681393968] 'process raft request'  (duration: 100.804599ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:28.815520Z","caller":"traceutil/trace.go:171","msg":"trace[1492983176] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:426; }","duration":"114.139526ms","start":"2024-12-02T11:31:28.700670Z","end":"2024-12-02T11:31:28.814809Z","steps":["trace[1492983176] 'agreement among raft nodes before linearized reading'  (duration: 107.343735ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:29.022068Z","caller":"traceutil/trace.go:171","msg":"trace[1344895774] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"102.581521ms","start":"2024-12-02T11:31:28.919469Z","end":"2024-12-02T11:31:29.022051Z","steps":["trace[1344895774] 'process raft request'  (duration: 95.274828ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:31:29.627269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.960238ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/gadget/gadget-role-binding\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:31:29.628691Z","caller":"traceutil/trace.go:171","msg":"trace[933279930] range","detail":"{range_begin:/registry/rolebindings/gadget/gadget-role-binding; range_end:; response_count:0; response_revision:508; }","duration":"104.379146ms","start":"2024-12-02T11:31:29.524290Z","end":"2024-12-02T11:31:29.628669Z","steps":["trace[933279930] 'agreement among raft nodes before linearized reading'  (duration: 92.111489ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:29.900773Z","caller":"traceutil/trace.go:171","msg":"trace[315314590] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"184.267399ms","start":"2024-12-02T11:31:29.716484Z","end":"2024-12-02T11:31:29.900752Z","steps":["trace[315314590] 'process raft request'  (duration: 105.458614ms)","trace[315314590] 'compare'  (duration: 78.678611ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-02T11:31:29.902113Z","caller":"traceutil/trace.go:171","msg":"trace[231914416] linearizableReadLoop","detail":"{readStateIndex:523; appliedIndex:520; }","duration":"185.211539ms","start":"2024-12-02T11:31:29.716888Z","end":"2024-12-02T11:31:29.902099Z","steps":["trace[231914416] 'read index received'  (duration: 105.076151ms)","trace[231914416] 'applied index is now lower than readState.Index'  (duration: 80.134644ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-02T11:31:29.903139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.235365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:573"}
	{"level":"info","ts":"2024-12-02T11:31:29.908429Z","caller":"traceutil/trace.go:171","msg":"trace[2022297626] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:516; }","duration":"191.527203ms","start":"2024-12-02T11:31:29.716884Z","end":"2024-12-02T11:31:29.908411Z","steps":["trace[2022297626] 'agreement among raft nodes before linearized reading'  (duration: 186.131649ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:29.903185Z","caller":"traceutil/trace.go:171","msg":"trace[209334257] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"186.606746ms","start":"2024-12-02T11:31:29.716566Z","end":"2024-12-02T11:31:29.903173Z","steps":["trace[209334257] 'process raft request'  (duration: 185.266146ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:31:29.908442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.417349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:31:29.909084Z","caller":"traceutil/trace.go:171","msg":"trace[1945374566] range","detail":"{range_begin:/registry/deployments/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:516; }","duration":"192.06554ms","start":"2024-12-02T11:31:29.717003Z","end":"2024-12-02T11:31:29.909068Z","steps":["trace[1945374566] 'agreement among raft nodes before linearized reading'  (duration: 191.388367ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:29.903350Z","caller":"traceutil/trace.go:171","msg":"trace[1853367450] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"186.386985ms","start":"2024-12-02T11:31:29.716957Z","end":"2024-12-02T11:31:29.903344Z","steps":["trace[1853367450] 'process raft request'  (duration: 184.965319ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:29.903265Z","caller":"traceutil/trace.go:171","msg":"trace[1147361934] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"186.562916ms","start":"2024-12-02T11:31:29.716686Z","end":"2024-12-02T11:31:29.903249Z","steps":["trace[1147361934] 'process raft request'  (duration: 185.207937ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:48.820440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.099205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2024-12-02T11:32:48.820514Z","caller":"traceutil/trace.go:171","msg":"trace[1203524728] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1213; }","duration":"122.187111ms","start":"2024-12-02T11:32:48.698312Z","end":"2024-12-02T11:32:48.820500Z","steps":["trace[1203524728] 'range keys from in-memory index tree'  (duration: 122.013044ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:32:48.820557Z","caller":"traceutil/trace.go:171","msg":"trace[875834341] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"121.646489ms","start":"2024-12-02T11:32:48.698889Z","end":"2024-12-02T11:32:48.820536Z","steps":["trace[875834341] 'process raft request'  (duration: 57.330799ms)","trace[875834341] 'compare'  (duration: 64.180975ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:36:04 up 18 min,  0 users,  load average: 0.38, 0.58, 0.31
	Linux addons-522394 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c] <==
	I1202 11:34:03.403309       1 main.go:301] handling current node
	I1202 11:34:13.404346       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:34:13.404393       1 main.go:301] handling current node
	I1202 11:34:23.410078       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:34:23.410133       1 main.go:301] handling current node
	I1202 11:34:33.401414       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:34:33.401453       1 main.go:301] handling current node
	I1202 11:34:43.404890       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:34:43.404923       1 main.go:301] handling current node
	I1202 11:34:53.402392       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:34:53.402523       1 main.go:301] handling current node
	I1202 11:35:03.404437       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:35:03.404470       1 main.go:301] handling current node
	I1202 11:35:13.409782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:35:13.409819       1 main.go:301] handling current node
	I1202 11:35:23.410868       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:35:23.410901       1 main.go:301] handling current node
	I1202 11:35:33.401151       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:35:33.401202       1 main.go:301] handling current node
	I1202 11:35:43.408344       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:35:43.408390       1 main.go:301] handling current node
	I1202 11:35:53.408334       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:35:53.408370       1 main.go:301] handling current node
	I1202 11:36:03.402025       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:36:03.402061       1 main.go:301] handling current node
	
	
	==> kube-apiserver [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc] <==
	E1202 11:33:04.223594       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.91.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.91.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.91.223:443: connect: connection refused" logger="UnhandledError"
	E1202 11:33:04.225240       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.91.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.91.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.91.223:443: connect: connection refused" logger="UnhandledError"
	I1202 11:33:04.257410       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1202 11:33:22.348846       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43592: use of closed network connection
	E1202 11:33:22.511236       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43618: use of closed network connection
	I1202 11:33:31.437291       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.173.199"}
	I1202 11:33:37.197862       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1202 11:33:38.314827       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1202 11:33:42.662450       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1202 11:33:42.834492       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.4.66"}
	I1202 11:34:34.595666       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1202 11:34:52.328706       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:52.328762       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:52.341228       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:52.341364       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:52.342673       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:52.342710       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:52.358604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:52.358649       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:52.363863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:52.363904       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1202 11:34:53.342958       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1202 11:34:53.400473       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1202 11:34:53.409954       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1202 11:36:03.112895       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.181.158"}
	
	
	==> kube-controller-manager [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611] <==
	E1202 11:35:00.226439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:01.058797       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:01.058834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:02.947431       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:02.947471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:03.473970       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:03.474007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:11.136328       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:11.136375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:13.601130       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:13.601181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:15.459412       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:15.459455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:28.304343       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:28.304389       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:33.327171       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:33.327216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:34.904902       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:34.904941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:35:47.874327       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:35:47.874371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1202 11:36:02.977953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.541848ms"
	I1202 11:36:02.982840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="4.833622ms"
	I1202 11:36:02.982963       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="74.032µs"
	I1202 11:36:02.986409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="43.555µs"
	
	
	==> kube-proxy [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402] <==
	I1202 11:31:26.209045       1 server_linux.go:66] "Using iptables proxy"
	I1202 11:31:27.909979       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1202 11:31:27.910061       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 11:31:28.805396       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 11:31:28.805570       1 server_linux.go:169] "Using iptables Proxier"
	I1202 11:31:28.905704       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 11:31:28.906556       1 server.go:483] "Version info" version="v1.31.2"
	I1202 11:31:28.906922       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 11:31:28.908332       1 config.go:199] "Starting service config controller"
	I1202 11:31:28.908357       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 11:31:28.908401       1 config.go:105] "Starting endpoint slice config controller"
	I1202 11:31:28.908413       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 11:31:28.908988       1 config.go:328] "Starting node config controller"
	I1202 11:31:28.909009       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 11:31:29.108916       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 11:31:29.112318       1 shared_informer.go:320] Caches are synced for service config
	I1202 11:31:29.112806       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0] <==
	E1202 11:31:17.222151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1202 11:31:17.222265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.222337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 11:31:17.222367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.222710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 11:31:17.222730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.052811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1202 11:31:18.052849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.083338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 11:31:18.083378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.133975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:18.134020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.135974       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:18.136005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.242945       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 11:31:18.242991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.265248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:18.265291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.368794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 11:31:18.368831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.393065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 11:31:18.393114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.474874       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 11:31:18.474907       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1202 11:31:20.618175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 11:35:59 addons-522394 kubelet[1638]: E1202 11:35:59.470557    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139359470347233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616784,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:35:59 addons-522394 kubelet[1638]: E1202 11:35:59.470591    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139359470347233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616784,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.977985    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a26dca9-12e6-468f-9bb4-3e1ab16070e6" containerName="csi-resizer"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.978030    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f73951d4-ec85-4a1d-abac-ba3b7a4431e5" containerName="volume-snapshot-controller"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.978042    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="543148f9-cf20-4bf6-b6e3-064e2187291e" containerName="task-pv-container"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.978052    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="liveness-probe"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.978061    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="node-driver-registrar"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.978072    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="hostpath"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.978089    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="csi-provisioner"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.978099    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c389e45-1a9d-4eee-90be-e9fac8b383e0" containerName="csi-attacher"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.978107    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f401a09e-b82e-4309-afdf-e1f62db25a08" containerName="volume-snapshot-controller"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.978116    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="csi-external-health-monitor-controller"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: E1202 11:36:02.978126    1638 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="csi-snapshotter"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978181    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="f73951d4-ec85-4a1d-abac-ba3b7a4431e5" containerName="volume-snapshot-controller"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978193    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="hostpath"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978201    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="liveness-probe"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978209    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="csi-snapshotter"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978217    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c389e45-1a9d-4eee-90be-e9fac8b383e0" containerName="csi-attacher"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978226    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="csi-external-health-monitor-controller"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978234    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="node-driver-registrar"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978241    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a26dca9-12e6-468f-9bb4-3e1ab16070e6" containerName="csi-resizer"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978248    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="38d189a6-30cc-4de4-9554-b7b17ccabac5" containerName="csi-provisioner"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978258    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="543148f9-cf20-4bf6-b6e3-064e2187291e" containerName="task-pv-container"
	Dec 02 11:36:02 addons-522394 kubelet[1638]: I1202 11:36:02.978266    1638 memory_manager.go:354] "RemoveStaleState removing state" podUID="f401a09e-b82e-4309-afdf-e1f62db25a08" containerName="volume-snapshot-controller"
	Dec 02 11:36:03 addons-522394 kubelet[1638]: I1202 11:36:03.150889    1638 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g8cv\" (UniqueName: \"kubernetes.io/projected/f222806c-0732-46a8-af48-5e5cf69caf34-kube-api-access-9g8cv\") pod \"hello-world-app-55bf9c44b4-src62\" (UID: \"f222806c-0732-46a8-af48-5e5cf69caf34\") " pod="default/hello-world-app-55bf9c44b4-src62"
	
	
	==> storage-provisioner [2cf66f4197da94cf92c19f51ce8b19fa55016456ce2724546fb6029163181857] <==
	I1202 11:31:44.852492       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 11:31:44.903483       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 11:31:44.903536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 11:31:44.911630       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 11:31:44.911785       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-522394_46bfd9a2-e490-4934-9b3a-74d022b3a481!
	I1202 11:31:44.911752       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fcfb205a-9f1e-4bc8-a96c-f8c7c0f764b9", APIVersion:"v1", ResourceVersion:"894", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-522394_46bfd9a2-e490-4934-9b3a-74d022b3a481 became leader
	I1202 11:31:45.012695       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-522394_46bfd9a2-e490-4934-9b3a-74d022b3a481!
	

                                                
                                                
-- /stdout --
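The stdout block above is the harness's automatic post-mortem dump of node and component state for the addons-522394 profile. Roughly comparable diagnostics can be collected by hand with the commands sketched below; this is an illustrative manual equivalent, not the helper the harness actually calls:

	# Roughly comparable diagnostics, gathered by hand for the same profile and context.
	out/minikube-linux-amd64 -p addons-522394 logs
	kubectl --context addons-522394 describe node addons-522394
	kubectl --context addons-522394 get pods -A -o wide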
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-522394 -n addons-522394
helpers_test.go:261: (dbg) Run:  kubectl --context addons-522394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-src62 ingress-nginx-admission-create-jrfdn ingress-nginx-admission-patch-j7fb2
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-522394 describe pod hello-world-app-55bf9c44b4-src62 ingress-nginx-admission-create-jrfdn ingress-nginx-admission-patch-j7fb2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-522394 describe pod hello-world-app-55bf9c44b4-src62 ingress-nginx-admission-create-jrfdn ingress-nginx-admission-patch-j7fb2: exit status 1 (66.999902ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-src62
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-522394/192.168.49.2
	Start Time:       Mon, 02 Dec 2024 11:36:02 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9g8cv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9g8cv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-src62 to addons-522394
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.538s (1.538s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jrfdn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-j7fb2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-522394 describe pod hello-world-app-55bf9c44b4-src62 ingress-nginx-admission-create-jrfdn ingress-nginx-admission-patch-j7fb2: exit status 1
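The exit status 1 above is expected when some of the listed pods have already been cleaned up: kubectl describe pod fails on any name it cannot find, as the NotFound errors in the stderr block show. A minimal shell sketch of a more tolerant variant is below; the loop is hypothetical and not what helpers_test.go actually runs:

	# Describe only the pods that still exist; the loop is a hypothetical variant,
	# not the command the test harness runs.
	for pod in hello-world-app-55bf9c44b4-src62 ingress-nginx-admission-create-jrfdn ingress-nginx-admission-patch-j7fb2; do
	  if kubectl --context addons-522394 get pod "$pod" --ignore-not-found -o name | grep -q .; then
	    kubectl --context addons-522394 describe pod "$pod"
	  else
	    echo "skipping $pod: not found in the default namespace"
	  fi
	done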
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-522394 addons disable ingress-dns --alsologtostderr -v=1: (1.298572599s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-522394 addons disable ingress --alsologtostderr -v=1: (7.616871758s)
--- FAIL: TestAddons/parallel/Ingress (151.98s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (322.21s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.92697ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-cmfs5" [d201f129-cdd9-474b-90ff-b22982035951] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00574208s
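The 6m0s wait above is the harness polling for a healthy metrics-server pod by label. An approximately equivalent manual check, sketched here with kubectl's built-in wait rather than the harness helper, would be:

	# Approximate manual equivalent of the harness's label-based readiness wait.
	kubectl --context addons-522394 -n kube-system wait pod \
	  -l k8s-app=metrics-server --for=condition=Ready --timeout=6m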
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (70.362553ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 2m11.838961121s

                                                
                                                
** /stderr **
I1202 11:33:36.840657   13299 retry.go:31] will retry after 4.088886311s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (62.983886ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 2m15.991335867s

                                                
                                                
** /stderr **
I1202 11:33:40.993243   13299 retry.go:31] will retry after 4.866116509s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (67.508199ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-czks8, age: 2m2.926353841s

                                                
                                                
** /stderr **
I1202 11:33:45.928045   13299 retry.go:31] will retry after 4.006380013s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (63.604119ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-czks8, age: 2m6.996567169s

                                                
                                                
** /stderr **
I1202 11:33:49.998351   13299 retry.go:31] will retry after 14.57239893s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (63.736462ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 2m39.633231178s

                                                
                                                
** /stderr **
I1202 11:34:04.635018   13299 retry.go:31] will retry after 10.529195477s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (63.532863ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 2m50.225679081s

                                                
                                                
** /stderr **
I1202 11:34:15.228055   13299 retry.go:31] will retry after 23.843636843s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (62.225849ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 3m14.133212509s

                                                
                                                
** /stderr **
I1202 11:34:39.135090   13299 retry.go:31] will retry after 19.559482708s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (59.996299ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 3m33.755936012s

                                                
                                                
** /stderr **
I1202 11:34:58.757851   13299 retry.go:31] will retry after 37.953828083s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (61.779354ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 4m11.771905825s

                                                
                                                
** /stderr **
I1202 11:35:36.774313   13299 retry.go:31] will retry after 38.825691417s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (59.825856ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 4m50.65834547s

                                                
                                                
** /stderr **
I1202 11:36:15.660586   13299 retry.go:31] will retry after 1m17.190147153s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (62.17187ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 6m7.913512864s

                                                
                                                
** /stderr **
I1202 11:37:32.915722   13299 retry.go:31] will retry after 39.924526465s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (61.319546ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 6m47.900779781s

                                                
                                                
** /stderr **
I1202 11:38:12.902992   13299 retry.go:31] will retry after 37.556658468s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-522394 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-522394 top pods -n kube-system: exit status 1 (60.79688ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2cr8g, age: 7m25.5233195s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-522394
helpers_test.go:235: (dbg) docker inspect addons-522394:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d",
	        "Created": "2024-12-02T11:31:02.743927926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 15359,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-02T11:31:02.885474163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d/hosts",
	        "LogPath": "/var/lib/docker/containers/f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d/f1156cea5e57d7643c4d052966bdd6ba07a406f81fdde58aebfdbcde723b215d-json.log",
	        "Name": "/addons-522394",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-522394:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-522394",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/84d765ebcb16cc3968ac44d5bd8ac1c9a7e64095628155f57bbcff42e9990b07-init/diff:/var/lib/docker/overlay2/098fd1b37996620d1394051c0f2d145ec7cc4c66ec7f899bcd76f461df21801b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84d765ebcb16cc3968ac44d5bd8ac1c9a7e64095628155f57bbcff42e9990b07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84d765ebcb16cc3968ac44d5bd8ac1c9a7e64095628155f57bbcff42e9990b07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84d765ebcb16cc3968ac44d5bd8ac1c9a7e64095628155f57bbcff42e9990b07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-522394",
	                "Source": "/var/lib/docker/volumes/addons-522394/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-522394",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-522394",
	                "name.minikube.sigs.k8s.io": "addons-522394",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a35dc827c374d8139c79717941793e6398ff4a537867124a125cb8259705dcb",
	            "SandboxKey": "/var/run/docker/netns/2a35dc827c37",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-522394": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d38ec22788b795d8d65d951fb8091f29e0367d83fb60ea07791faa029050205d",
	                    "EndpointID": "418b55a9a55b19a3b61941492fd0c85aa323b6e304ba517b05f82505b61c0932",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-522394",
	                        "f1156cea5e57"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-522394 -n addons-522394
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-522394 logs -n 25: (1.096822804s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-535118                                                                   | download-docker-535118 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-422651   | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | binary-mirror-422651                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43737                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-422651                                                                     | binary-mirror-422651   | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| addons  | disable dashboard -p                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | addons-522394                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | addons-522394                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-522394 --wait=true                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | -p addons-522394                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-522394 ip                                                                            | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-522394 ssh curl -s                                                                   | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ssh     | addons-522394 ssh cat                                                                       | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | /opt/local-path-provisioner/pvc-8f0db6fc-4610-41c7-b84f-75a28b3ebb7d_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:34 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-522394 addons                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-522394 addons                                                                        | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-522394 ip                                                                            | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:36 UTC | 02 Dec 24 11:36 UTC |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:36 UTC | 02 Dec 24 11:36 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-522394 addons disable                                                                | addons-522394          | jenkins | v1.34.0 | 02 Dec 24 11:36 UTC | 02 Dec 24 11:36 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:30:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:30:38.448354   14602 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:30:38.448475   14602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:38.448485   14602 out.go:358] Setting ErrFile to fd 2...
	I1202 11:30:38.448490   14602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:38.448693   14602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 11:30:38.449265   14602 out.go:352] Setting JSON to false
	I1202 11:30:38.450170   14602 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":789,"bootTime":1733138249,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:30:38.450278   14602 start.go:139] virtualization: kvm guest
	I1202 11:30:38.452636   14602 out.go:177] * [addons-522394] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:30:38.454356   14602 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:30:38.454359   14602 notify.go:220] Checking for updates...
	I1202 11:30:38.457297   14602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:30:38.458908   14602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:30:38.460359   14602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	I1202 11:30:38.461756   14602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:30:38.463240   14602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:30:38.464726   14602 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:30:38.486735   14602 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1202 11:30:38.486831   14602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:30:38.531909   14602 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-02 11:30:38.523449877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:30:38.531999   14602 docker.go:318] overlay module found
	I1202 11:30:38.534142   14602 out.go:177] * Using the docker driver based on user configuration
	I1202 11:30:38.535539   14602 start.go:297] selected driver: docker
	I1202 11:30:38.535555   14602 start.go:901] validating driver "docker" against <nil>
	I1202 11:30:38.535565   14602 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:30:38.536412   14602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:30:38.581147   14602 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-02 11:30:38.572979487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:30:38.581348   14602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:30:38.581621   14602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:30:38.583502   14602 out.go:177] * Using Docker driver with root privileges
	I1202 11:30:38.584779   14602 cni.go:84] Creating CNI manager for ""
	I1202 11:30:38.584858   14602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 11:30:38.584869   14602 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 11:30:38.584937   14602 start.go:340] cluster config:
	{Name:addons-522394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-522394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:30:38.586374   14602 out.go:177] * Starting "addons-522394" primary control-plane node in "addons-522394" cluster
	I1202 11:30:38.587531   14602 cache.go:121] Beginning downloading kic base image for docker with crio
	I1202 11:30:38.588692   14602 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1202 11:30:38.589859   14602 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:30:38.589886   14602 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1202 11:30:38.589915   14602 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:30:38.589927   14602 cache.go:56] Caching tarball of preloaded images
	I1202 11:30:38.590032   14602 preload.go:172] Found /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:30:38.590045   14602 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:30:38.590417   14602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/config.json ...
	I1202 11:30:38.590442   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/config.json: {Name:mk4bd885db87af2c06fd1da748cdd3f6e169fab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:30:38.605457   14602 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1202 11:30:38.605602   14602 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1202 11:30:38.605628   14602 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1202 11:30:38.605638   14602 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1202 11:30:38.605651   14602 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1202 11:30:38.605661   14602 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1202 11:30:50.373643   14602 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1202 11:30:50.373681   14602 cache.go:194] Successfully downloaded all kic artifacts
	I1202 11:30:50.373730   14602 start.go:360] acquireMachinesLock for addons-522394: {Name:mke96f53f0edd6a6d51035c4d22fed40662473b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:30:50.373856   14602 start.go:364] duration metric: took 86.059µs to acquireMachinesLock for "addons-522394"
	I1202 11:30:50.373892   14602 start.go:93] Provisioning new machine with config: &{Name:addons-522394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-522394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:30:50.373977   14602 start.go:125] createHost starting for "" (driver="docker")
	I1202 11:30:50.376042   14602 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1202 11:30:50.376367   14602 start.go:159] libmachine.API.Create for "addons-522394" (driver="docker")
	I1202 11:30:50.376408   14602 client.go:168] LocalClient.Create starting
	I1202 11:30:50.376529   14602 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem
	I1202 11:30:50.529702   14602 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem
	I1202 11:30:50.716354   14602 cli_runner.go:164] Run: docker network inspect addons-522394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1202 11:30:50.732411   14602 cli_runner.go:211] docker network inspect addons-522394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1202 11:30:50.732483   14602 network_create.go:284] running [docker network inspect addons-522394] to gather additional debugging logs...
	I1202 11:30:50.732508   14602 cli_runner.go:164] Run: docker network inspect addons-522394
	W1202 11:30:50.748560   14602 cli_runner.go:211] docker network inspect addons-522394 returned with exit code 1
	I1202 11:30:50.748593   14602 network_create.go:287] error running [docker network inspect addons-522394]: docker network inspect addons-522394: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-522394 not found
	I1202 11:30:50.748606   14602 network_create.go:289] output of [docker network inspect addons-522394]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-522394 not found
	
	** /stderr **
	I1202 11:30:50.748739   14602 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 11:30:50.765326   14602 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cda8f0}
	I1202 11:30:50.765369   14602 network_create.go:124] attempt to create docker network addons-522394 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1202 11:30:50.765416   14602 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-522394 addons-522394
	I1202 11:30:50.824188   14602 network_create.go:108] docker network addons-522394 192.168.49.0/24 created
	I1202 11:30:50.824218   14602 kic.go:121] calculated static IP "192.168.49.2" for the "addons-522394" container
	I1202 11:30:50.824296   14602 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1202 11:30:50.839741   14602 cli_runner.go:164] Run: docker volume create addons-522394 --label name.minikube.sigs.k8s.io=addons-522394 --label created_by.minikube.sigs.k8s.io=true
	I1202 11:30:50.857124   14602 oci.go:103] Successfully created a docker volume addons-522394
	I1202 11:30:50.857208   14602 cli_runner.go:164] Run: docker run --rm --name addons-522394-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-522394 --entrypoint /usr/bin/test -v addons-522394:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1202 11:30:57.970270   14602 cli_runner.go:217] Completed: docker run --rm --name addons-522394-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-522394 --entrypoint /usr/bin/test -v addons-522394:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (7.1130193s)
	I1202 11:30:57.970302   14602 oci.go:107] Successfully prepared a docker volume addons-522394
	I1202 11:30:57.970321   14602 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:30:57.970343   14602 kic.go:194] Starting extracting preloaded images to volume ...
	I1202 11:30:57.970408   14602 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-522394:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1202 11:31:02.677696   14602 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-522394:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.707240494s)
	I1202 11:31:02.677724   14602 kic.go:203] duration metric: took 4.707379075s to extract preloaded images to volume ...
	W1202 11:31:02.677851   14602 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1202 11:31:02.677943   14602 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1202 11:31:02.728955   14602 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-522394 --name addons-522394 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-522394 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-522394 --network addons-522394 --ip 192.168.49.2 --volume addons-522394:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1202 11:31:03.051385   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Running}}
	I1202 11:31:03.069862   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:03.089267   14602 cli_runner.go:164] Run: docker exec addons-522394 stat /var/lib/dpkg/alternatives/iptables
	I1202 11:31:03.133462   14602 oci.go:144] the created container "addons-522394" has a running status.
	I1202 11:31:03.133488   14602 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa...
	I1202 11:31:03.205441   14602 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1202 11:31:03.226382   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:03.243035   14602 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1202 11:31:03.243057   14602 kic_runner.go:114] Args: [docker exec --privileged addons-522394 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1202 11:31:03.283614   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:03.306449   14602 machine.go:93] provisionDockerMachine start ...
	I1202 11:31:03.306537   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:03.324692   14602 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:03.324969   14602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 11:31:03.324988   14602 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 11:31:03.325641   14602 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53152->127.0.0.1:32768: read: connection reset by peer
	I1202 11:31:06.460018   14602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-522394
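The SSH provisioning above goes through a host port published on 127.0.0.1 (32768 in this run; the first dial is reset while sshd is still starting, then succeeds). The mapping and a manual login can be reproduced with the key path taken from the log, keeping in mind the port differs between runs:

    docker port addons-522394 22/tcp   # e.g. 127.0.0.1:32768
    ssh -i /home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa \
        -p 32768 docker@127.0.0.1 hostname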
	
	I1202 11:31:06.460054   14602 ubuntu.go:169] provisioning hostname "addons-522394"
	I1202 11:31:06.460119   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:06.476971   14602 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:06.477195   14602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 11:31:06.477211   14602 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-522394 && echo "addons-522394" | sudo tee /etc/hostname
	I1202 11:31:06.610900   14602 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-522394
	
	I1202 11:31:06.610971   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:06.627751   14602 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:06.627915   14602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 11:31:06.627932   14602 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-522394' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-522394/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-522394' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:31:06.752451   14602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:31:06.752475   14602 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6540/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6540/.minikube}
	I1202 11:31:06.752494   14602 ubuntu.go:177] setting up certificates
	I1202 11:31:06.752508   14602 provision.go:84] configureAuth start
	I1202 11:31:06.752566   14602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-522394
	I1202 11:31:06.769244   14602 provision.go:143] copyHostCerts
	I1202 11:31:06.769343   14602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem (1078 bytes)
	I1202 11:31:06.769463   14602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem (1123 bytes)
	I1202 11:31:06.769519   14602 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem (1679 bytes)
	I1202 11:31:06.769568   14602 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem org=jenkins.addons-522394 san=[127.0.0.1 192.168.49.2 addons-522394 localhost minikube]
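The SANs requested above (127.0.0.1, 192.168.49.2, addons-522394, localhost, minikube) end up in the generated server certificate; a quick way to confirm them on the file named in the log is:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'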
	I1202 11:31:07.084157   14602 provision.go:177] copyRemoteCerts
	I1202 11:31:07.084212   14602 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:31:07.084248   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.101169   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:07.196572   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 11:31:07.218457   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:31:07.239828   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:31:07.261963   14602 provision.go:87] duration metric: took 509.439437ms to configureAuth
	I1202 11:31:07.262000   14602 ubuntu.go:193] setting minikube options for container-runtime
	I1202 11:31:07.262203   14602 config.go:182] Loaded profile config "addons-522394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:07.262326   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.279559   14602 main.go:141] libmachine: Using SSH client type: native
	I1202 11:31:07.279733   14602 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1202 11:31:07.279747   14602 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:31:07.490475   14602 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
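The drop-in written above only takes effect because crio is restarted in the same command. A small sketch of how the result could be double-checked from the host (container name from the log; the kicbase image runs systemd, so systemctl is available inside it):

    docker exec addons-522394 cat /etc/sysconfig/crio.minikube
    docker exec addons-522394 systemctl is-active crio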
	
	I1202 11:31:07.490504   14602 machine.go:96] duration metric: took 4.184034941s to provisionDockerMachine
	I1202 11:31:07.490520   14602 client.go:171] duration metric: took 17.114098916s to LocalClient.Create
	I1202 11:31:07.490543   14602 start.go:167] duration metric: took 17.114178962s to libmachine.API.Create "addons-522394"
	I1202 11:31:07.490554   14602 start.go:293] postStartSetup for "addons-522394" (driver="docker")
	I1202 11:31:07.490568   14602 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:31:07.490632   14602 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:31:07.490684   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.507554   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:07.600753   14602 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:31:07.603609   14602 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 11:31:07.603637   14602 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1202 11:31:07.603645   14602 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1202 11:31:07.603652   14602 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1202 11:31:07.603662   14602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6540/.minikube/addons for local assets ...
	I1202 11:31:07.603715   14602 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6540/.minikube/files for local assets ...
	I1202 11:31:07.603745   14602 start.go:296] duration metric: took 113.184134ms for postStartSetup
	I1202 11:31:07.603993   14602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-522394
	I1202 11:31:07.620607   14602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/config.json ...
	I1202 11:31:07.620846   14602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 11:31:07.620881   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.637027   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:07.724920   14602 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 11:31:07.729164   14602 start.go:128] duration metric: took 17.355169836s to createHost
	I1202 11:31:07.729191   14602 start.go:83] releasing machines lock for "addons-522394", held for 17.35531727s
	I1202 11:31:07.729268   14602 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-522394
	I1202 11:31:07.745687   14602 ssh_runner.go:195] Run: cat /version.json
	I1202 11:31:07.745755   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.745770   14602 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:31:07.745823   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:07.763513   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:07.763804   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:07.926280   14602 ssh_runner.go:195] Run: systemctl --version
	I1202 11:31:07.930164   14602 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:31:08.064967   14602 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 11:31:08.069163   14602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:31:08.087042   14602 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1202 11:31:08.087170   14602 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:31:08.113184   14602 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1202 11:31:08.113205   14602 start.go:495] detecting cgroup driver to use...
	I1202 11:31:08.113235   14602 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 11:31:08.113270   14602 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:31:08.126531   14602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:31:08.136665   14602 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:31:08.136710   14602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:31:08.148927   14602 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:31:08.162040   14602 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:31:08.246630   14602 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:31:08.321564   14602 docker.go:233] disabling docker service ...
	I1202 11:31:08.321614   14602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:31:08.337907   14602 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:31:08.348550   14602 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:31:08.426490   14602 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:31:08.506554   14602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:31:08.516948   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:31:08.531489   14602 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:31:08.531545   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.540668   14602 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:31:08.540725   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.549633   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.558699   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.567611   14602 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:31:08.576294   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.585011   14602 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:31:08.599023   14602 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
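The sed sequence above leaves four settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroupfs cgroup manager, the pod-scoped conmon cgroup, and the net.ipv4.ip_unprivileged_port_start=0 default sysctl. They can be checked in one pass, e.g.:

    docker exec addons-522394 sudo grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf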
	I1202 11:31:08.607899   14602 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:31:08.615807   14602 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 11:31:08.615875   14602 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 11:31:08.629171   14602 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:31:08.637882   14602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:08.716908   14602 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:31:08.818193   14602 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:31:08.818269   14602 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:31:08.821499   14602 start.go:563] Will wait 60s for crictl version
	I1202 11:31:08.821549   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:31:08.824914   14602 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:31:08.855819   14602 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1202 11:31:08.855920   14602 ssh_runner.go:195] Run: crio --version
	I1202 11:31:08.888826   14602 ssh_runner.go:195] Run: crio --version
	I1202 11:31:08.924228   14602 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1202 11:31:08.925675   14602 cli_runner.go:164] Run: docker network inspect addons-522394 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 11:31:08.942451   14602 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 11:31:08.945891   14602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:31:08.955847   14602 kubeadm.go:883] updating cluster {Name:addons-522394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-522394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:31:08.955958   14602 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:31:08.956004   14602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:31:09.022776   14602 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:31:09.022803   14602 crio.go:433] Images already preloaded, skipping extraction
	I1202 11:31:09.022851   14602 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:31:09.053133   14602 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:31:09.053155   14602 cache_images.go:84] Images are preloaded, skipping loading
	I1202 11:31:09.053163   14602 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1202 11:31:09.053246   14602 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-522394 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-522394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
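The kubelet flags rendered above are written a few lines further down to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp entries below). On the node, the effective unit including that drop-in can be reviewed with:

    docker exec addons-522394 systemctl cat kubelet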
	I1202 11:31:09.053314   14602 ssh_runner.go:195] Run: crio config
	I1202 11:31:09.095154   14602 cni.go:84] Creating CNI manager for ""
	I1202 11:31:09.095175   14602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 11:31:09.095185   14602 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:31:09.095205   14602 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-522394 NodeName:addons-522394 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:31:09.095322   14602 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-522394"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 11:31:09.095379   14602 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:31:09.103556   14602 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:31:09.103620   14602 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 11:31:09.111562   14602 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 11:31:09.127615   14602 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:31:09.143980   14602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
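The kubeadm config printed above is what lands in /var/tmp/minikube/kubeadm.yaml.new here. As a sketch, assuming the `kubeadm config validate` subcommand available in recent kubeadm releases, it could be sanity-checked on the node before init consumes it:

    # kubeadm lives under the minikube binaries dir, per the init command later in this log
    docker exec addons-522394 sudo /var/lib/minikube/binaries/v1.31.2/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml.new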
	I1202 11:31:09.160304   14602 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1202 11:31:09.163655   14602 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:31:09.173716   14602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:09.245329   14602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:31:09.257689   14602 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394 for IP: 192.168.49.2
	I1202 11:31:09.257724   14602 certs.go:194] generating shared ca certs ...
	I1202 11:31:09.257739   14602 certs.go:226] acquiring lock for ca certs: {Name:mkb9f54a1a5b06ba02335d6260145758dc70e4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.257867   14602 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key
	I1202 11:31:09.469731   14602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt ...
	I1202 11:31:09.469764   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt: {Name:mk4ae91dfc26d7153230fe2d9cab66a79015108a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.469961   14602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key ...
	I1202 11:31:09.469973   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key: {Name:mkd438dac45f54961607e644fe9baf5d15ef9f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.470048   14602 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key
	I1202 11:31:09.594064   14602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt ...
	I1202 11:31:09.594101   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt: {Name:mk76b36b478ee22df66ae14e5403698cb715b005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.594292   14602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key ...
	I1202 11:31:09.594305   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key: {Name:mk26bd14e98e8bc68bd181b692e23db7a5175adf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.594386   14602 certs.go:256] generating profile certs ...
	I1202 11:31:09.594447   14602 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.key
	I1202 11:31:09.594472   14602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt with IP's: []
	I1202 11:31:09.751460   14602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt ...
	I1202 11:31:09.751496   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: {Name:mk2e66010c7db27c0dace19df49014b2d0afb6c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.751672   14602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.key ...
	I1202 11:31:09.751680   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.key: {Name:mkca5e7485085a3453a54b4745cfc443fdaeaf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:09.751755   14602 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key.2a96479c
	I1202 11:31:09.751773   14602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt.2a96479c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1202 11:31:10.101761   14602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt.2a96479c ...
	I1202 11:31:10.101789   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt.2a96479c: {Name:mkcc418856c5eb273401a18c95c72fba1024ade2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:10.101937   14602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key.2a96479c ...
	I1202 11:31:10.101950   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key.2a96479c: {Name:mk534a25778866ef4232d4034b1d474493bfe2ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:10.102019   14602 certs.go:381] copying /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt.2a96479c -> /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt
	I1202 11:31:10.102094   14602 certs.go:385] copying /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key.2a96479c -> /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key
	I1202 11:31:10.102139   14602 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.key
	I1202 11:31:10.102156   14602 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.crt with IP's: []
	I1202 11:31:10.432510   14602 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.crt ...
	I1202 11:31:10.432548   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.crt: {Name:mkb33877c82f9fd153d1621054c6f7a99b6da53e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:10.432729   14602 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.key ...
	I1202 11:31:10.432740   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.key: {Name:mk635a5916d42202e3b8acae8ce56111092d49f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:10.432911   14602 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:31:10.432946   14602 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem (1078 bytes)
	I1202 11:31:10.432971   14602 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:31:10.432999   14602 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem (1679 bytes)
	I1202 11:31:10.433625   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:31:10.455890   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:31:10.477324   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:31:10.498778   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 11:31:10.519811   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 11:31:10.540521   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 11:31:10.561394   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:31:10.582528   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:31:10.603242   14602 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:31:10.623859   14602 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 11:31:10.639406   14602 ssh_runner.go:195] Run: openssl version
	I1202 11:31:10.644391   14602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:31:10.652798   14602 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:10.655792   14602 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:10.655836   14602 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:31:10.662042   14602 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
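The b5213941.0 link name above is simply the OpenSSL subject hash of the minikube CA, so the pairing can be verified inside the node (assuming openssl is present in the kicbase image):

    docker exec addons-522394 openssl x509 -hash -noout \
      -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    docker exec addons-522394 ls -l /etc/ssl/certs/b5213941.0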
	I1202 11:31:10.670503   14602 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:31:10.673440   14602 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:31:10.673489   14602 kubeadm.go:392] StartCluster: {Name:addons-522394 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-522394 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:31:10.673575   14602 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:31:10.673637   14602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:31:10.704726   14602 cri.go:89] found id: ""
	I1202 11:31:10.704785   14602 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 11:31:10.712701   14602 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 11:31:10.720604   14602 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1202 11:31:10.720651   14602 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 11:31:10.728353   14602 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 11:31:10.728371   14602 kubeadm.go:157] found existing configuration files:
	
	I1202 11:31:10.728405   14602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 11:31:10.735841   14602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 11:31:10.735885   14602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 11:31:10.743324   14602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 11:31:10.750796   14602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 11:31:10.750852   14602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 11:31:10.758231   14602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 11:31:10.765683   14602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 11:31:10.765733   14602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 11:31:10.772915   14602 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 11:31:10.780474   14602 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 11:31:10.780536   14602 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 11:31:10.787828   14602 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1202 11:31:10.840908   14602 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1202 11:31:10.892595   14602 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 11:31:20.085625   14602 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1202 11:31:20.085707   14602 kubeadm.go:310] [preflight] Running pre-flight checks
	I1202 11:31:20.085878   14602 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1202 11:31:20.085980   14602 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1202 11:31:20.086026   14602 kubeadm.go:310] OS: Linux
	I1202 11:31:20.086094   14602 kubeadm.go:310] CGROUPS_CPU: enabled
	I1202 11:31:20.086153   14602 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1202 11:31:20.086237   14602 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1202 11:31:20.086305   14602 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1202 11:31:20.086372   14602 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1202 11:31:20.086427   14602 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1202 11:31:20.086467   14602 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1202 11:31:20.086511   14602 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1202 11:31:20.086551   14602 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1202 11:31:20.086616   14602 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 11:31:20.086695   14602 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 11:31:20.086783   14602 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 11:31:20.086872   14602 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 11:31:20.088662   14602 out.go:235]   - Generating certificates and keys ...
	I1202 11:31:20.088745   14602 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1202 11:31:20.088805   14602 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1202 11:31:20.088883   14602 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 11:31:20.088968   14602 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1202 11:31:20.089050   14602 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1202 11:31:20.089131   14602 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1202 11:31:20.089229   14602 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1202 11:31:20.089416   14602 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-522394 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 11:31:20.089492   14602 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1202 11:31:20.089625   14602 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-522394 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1202 11:31:20.089704   14602 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 11:31:20.089767   14602 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 11:31:20.089825   14602 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1202 11:31:20.089900   14602 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 11:31:20.089944   14602 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 11:31:20.090018   14602 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 11:31:20.090128   14602 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 11:31:20.090237   14602 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 11:31:20.090334   14602 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 11:31:20.090453   14602 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 11:31:20.090512   14602 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 11:31:20.092021   14602 out.go:235]   - Booting up control plane ...
	I1202 11:31:20.092123   14602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 11:31:20.092189   14602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 11:31:20.092252   14602 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 11:31:20.092380   14602 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 11:31:20.092461   14602 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 11:31:20.092522   14602 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1202 11:31:20.092694   14602 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 11:31:20.092846   14602 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 11:31:20.092911   14602 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001512105s
	I1202 11:31:20.092976   14602 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1202 11:31:20.093027   14602 kubeadm.go:310] [api-check] The API server is healthy after 4.002254275s
	I1202 11:31:20.093116   14602 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 11:31:20.093240   14602 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 11:31:20.093311   14602 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 11:31:20.093517   14602 kubeadm.go:310] [mark-control-plane] Marking the node addons-522394 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 11:31:20.093604   14602 kubeadm.go:310] [bootstrap-token] Using token: eeeqcp.fral8wgnp9vy03i0
	I1202 11:31:20.095084   14602 out.go:235]   - Configuring RBAC rules ...
	I1202 11:31:20.095188   14602 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 11:31:20.095277   14602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 11:31:20.095437   14602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 11:31:20.095553   14602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 11:31:20.095662   14602 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 11:31:20.095759   14602 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 11:31:20.095891   14602 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 11:31:20.095958   14602 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1202 11:31:20.096035   14602 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1202 11:31:20.096046   14602 kubeadm.go:310] 
	I1202 11:31:20.096115   14602 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1202 11:31:20.096123   14602 kubeadm.go:310] 
	I1202 11:31:20.096214   14602 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1202 11:31:20.096224   14602 kubeadm.go:310] 
	I1202 11:31:20.096259   14602 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1202 11:31:20.096359   14602 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 11:31:20.096410   14602 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 11:31:20.096416   14602 kubeadm.go:310] 
	I1202 11:31:20.096478   14602 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1202 11:31:20.096488   14602 kubeadm.go:310] 
	I1202 11:31:20.096555   14602 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 11:31:20.096564   14602 kubeadm.go:310] 
	I1202 11:31:20.096639   14602 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1202 11:31:20.096748   14602 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 11:31:20.096846   14602 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 11:31:20.096855   14602 kubeadm.go:310] 
	I1202 11:31:20.096975   14602 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 11:31:20.097087   14602 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1202 11:31:20.097099   14602 kubeadm.go:310] 
	I1202 11:31:20.097231   14602 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eeeqcp.fral8wgnp9vy03i0 \
	I1202 11:31:20.097384   14602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f7d4bd58f5eb8fb1f0363979e5ea4d5bcf2e37268538de75315f476aceafe2e5 \
	I1202 11:31:20.097415   14602 kubeadm.go:310] 	--control-plane 
	I1202 11:31:20.097421   14602 kubeadm.go:310] 
	I1202 11:31:20.097586   14602 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1202 11:31:20.097605   14602 kubeadm.go:310] 
	I1202 11:31:20.097721   14602 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eeeqcp.fral8wgnp9vy03i0 \
	I1202 11:31:20.097862   14602 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f7d4bd58f5eb8fb1f0363979e5ea4d5bcf2e37268538de75315f476aceafe2e5 
	I1202 11:31:20.097876   14602 cni.go:84] Creating CNI manager for ""
	I1202 11:31:20.097894   14602 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 11:31:20.100524   14602 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1202 11:31:20.101719   14602 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1202 11:31:20.105483   14602 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1202 11:31:20.105497   14602 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1202 11:31:20.122736   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
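The manifest applied above is the kindnet CNI deployment recommended earlier for the docker driver + crio runtime combination. Once the apiserver has scheduled it, the DaemonSet can be observed from the host; the app=kindnet label is an assumption based on minikube's kindnet manifest:

    kubectl --context addons-522394 -n kube-system get daemonset,pods -l app=kindnet -o wide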
	I1202 11:31:20.310027   14602 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 11:31:20.310147   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:20.310150   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-522394 minikube.k8s.io/updated_at=2024_12_02T11_31_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=addons-522394 minikube.k8s.io/primary=true
	I1202 11:31:20.404480   14602 ops.go:34] apiserver oom_adj: -16
	I1202 11:31:20.404481   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:20.904676   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:21.405592   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:21.905501   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:22.405118   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:22.905274   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:23.405210   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:23.905505   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:24.404953   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:24.904872   14602 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 11:31:24.968651   14602 kubeadm.go:1113] duration metric: took 4.65856046s to wait for elevateKubeSystemPrivileges
	I1202 11:31:24.968693   14602 kubeadm.go:394] duration metric: took 14.295207991s to StartCluster
	I1202 11:31:24.968714   14602 settings.go:142] acquiring lock: {Name:mkd94da5b026832ad8b1eceae7944b5245757344 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:24.968820   14602 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:31:24.969361   14602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/kubeconfig: {Name:mk5ee3d9b6afe00d14254b3bb7ff913980280999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:31:24.969698   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 11:31:24.970073   14602 config.go:182] Loaded profile config "addons-522394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:24.969863   14602 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:31:24.970200   14602 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1202 11:31:24.970316   14602 addons.go:69] Setting yakd=true in profile "addons-522394"
	I1202 11:31:24.970365   14602 addons.go:234] Setting addon yakd=true in "addons-522394"
	I1202 11:31:24.970410   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.970928   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.971013   14602 addons.go:69] Setting inspektor-gadget=true in profile "addons-522394"
	I1202 11:31:24.971053   14602 addons.go:234] Setting addon inspektor-gadget=true in "addons-522394"
	I1202 11:31:24.971092   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.971139   14602 addons.go:69] Setting storage-provisioner=true in profile "addons-522394"
	I1202 11:31:24.971167   14602 addons.go:234] Setting addon storage-provisioner=true in "addons-522394"
	I1202 11:31:24.971206   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.971524   14602 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-522394"
	I1202 11:31:24.971543   14602 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-522394"
	I1202 11:31:24.971734   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.971803   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.971952   14602 addons.go:69] Setting volumesnapshots=true in profile "addons-522394"
	I1202 11:31:24.971979   14602 addons.go:69] Setting ingress=true in profile "addons-522394"
	I1202 11:31:24.971996   14602 addons.go:234] Setting addon volumesnapshots=true in "addons-522394"
	I1202 11:31:24.971989   14602 addons.go:69] Setting ingress-dns=true in profile "addons-522394"
	I1202 11:31:24.972015   14602 addons.go:234] Setting addon ingress=true in "addons-522394"
	I1202 11:31:24.972015   14602 addons.go:234] Setting addon ingress-dns=true in "addons-522394"
	I1202 11:31:24.972030   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.972054   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.972064   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.972091   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.972517   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.972533   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.972551   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.972837   14602 addons.go:69] Setting default-storageclass=true in profile "addons-522394"
	I1202 11:31:24.972927   14602 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-522394"
	I1202 11:31:24.973022   14602 addons.go:69] Setting volcano=true in profile "addons-522394"
	I1202 11:31:24.973064   14602 addons.go:234] Setting addon volcano=true in "addons-522394"
	I1202 11:31:24.973090   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.973279   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.973518   14602 addons.go:69] Setting metrics-server=true in profile "addons-522394"
	I1202 11:31:24.973540   14602 addons.go:234] Setting addon metrics-server=true in "addons-522394"
	I1202 11:31:24.973718   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.973933   14602 addons.go:69] Setting cloud-spanner=true in profile "addons-522394"
	I1202 11:31:24.973958   14602 addons.go:234] Setting addon cloud-spanner=true in "addons-522394"
	I1202 11:31:24.974064   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.974209   14602 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-522394"
	I1202 11:31:24.974257   14602 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-522394"
	I1202 11:31:24.974288   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.974347   14602 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-522394"
	I1202 11:31:24.974390   14602 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-522394"
	I1202 11:31:24.974417   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.974562   14602 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-522394"
	I1202 11:31:24.974639   14602 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-522394"
	I1202 11:31:24.974669   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.976377   14602 out.go:177] * Verifying Kubernetes components...
	I1202 11:31:24.976434   14602 addons.go:69] Setting gcp-auth=true in profile "addons-522394"
	I1202 11:31:24.976471   14602 mustload.go:65] Loading cluster: addons-522394
	I1202 11:31:24.976503   14602 addons.go:69] Setting registry=true in profile "addons-522394"
	I1202 11:31:24.976517   14602 addons.go:234] Setting addon registry=true in "addons-522394"
	I1202 11:31:24.976554   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:24.976667   14602 config.go:182] Loaded profile config "addons-522394": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:31:24.976916   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.977067   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:24.978174   14602 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:31:25.007010   14602 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 11:31:25.007064   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:25.009062   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.009964   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.010195   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.010264   14602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1202 11:31:25.010770   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.011175   14602 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 11:31:25.012723   14602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:25.012848   14602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:31:25.012857   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 11:31:25.012890   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.014528   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.015505   14602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:25.017689   14602 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 11:31:25.017713   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 11:31:25.017771   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.025206   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.026333   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 11:31:25.027226   14602 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-522394"
	I1202 11:31:25.027271   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:25.027639   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 11:31:25.027658   14602 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 11:31:25.027713   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.027721   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.010381   14602 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 11:31:25.028373   14602 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 11:31:25.028420   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.029870   14602 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1202 11:31:25.031003   14602 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1202 11:31:25.031019   14602 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1202 11:31:25.031078   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.058764   14602 addons.go:234] Setting addon default-storageclass=true in "addons-522394"
	I1202 11:31:25.058814   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:25.059322   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:25.076138   14602 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 11:31:25.077606   14602 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 11:31:25.077628   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 11:31:25.077691   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.079352   14602 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1202 11:31:25.080781   14602 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 11:31:25.080800   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1202 11:31:25.080854   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.083563   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.084364   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 11:31:25.085116   14602 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1202 11:31:25.086249   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 11:31:25.086265   14602 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 11:31:25.086283   14602 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 11:31:25.086346   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.088475   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 11:31:25.088520   14602 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1202 11:31:25.090012   14602 out.go:177]   - Using image docker.io/registry:2.8.3
	I1202 11:31:25.092861   14602 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 11:31:25.092886   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 11:31:25.092939   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.094274   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 11:31:25.095262   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.096792   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	W1202 11:31:25.097369   14602 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 11:31:25.099405   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 11:31:25.104557   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 11:31:25.106712   14602 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 11:31:25.109295   14602 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1202 11:31:25.113070   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 11:31:25.113097   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 11:31:25.113158   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.117799   14602 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 11:31:25.117825   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 11:31:25.117887   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.121355   14602 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 11:31:25.121377   14602 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 11:31:25.121442   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.122272   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.124732   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.125907   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.133507   14602 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1202 11:31:25.135815   14602 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 11:31:25.136180   14602 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1202 11:31:25.136205   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 11:31:25.136278   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.144477   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.148374   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.150230   14602 out.go:177]   - Using image docker.io/busybox:stable
	I1202 11:31:25.151517   14602 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 11:31:25.151538   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 11:31:25.151594   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:25.160602   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.160811   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.162571   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.163771   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.165712   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.167710   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.177069   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:25.204311   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 11:31:25.408194   14602 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:31:25.522853   14602 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 11:31:25.522884   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1202 11:31:25.523058   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 11:31:25.701660   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 11:31:25.702245   14602 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 11:31:25.702305   14602 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 11:31:25.710690   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 11:31:25.713077   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 11:31:25.716483   14602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 11:31:25.716509   14602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 11:31:25.718076   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 11:31:25.718094   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 11:31:25.801825   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 11:31:25.802151   14602 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 11:31:25.802173   14602 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 11:31:25.803105   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 11:31:25.810173   14602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 11:31:25.810266   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 11:31:25.812388   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 11:31:25.911687   14602 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 11:31:25.911776   14602 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 11:31:25.918266   14602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 11:31:25.918292   14602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 11:31:25.918899   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 11:31:26.001725   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 11:31:26.003520   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 11:31:26.003594   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 11:31:26.018457   14602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 11:31:26.018484   14602 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 11:31:26.101997   14602 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 11:31:26.102048   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 11:31:26.316019   14602 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 11:31:26.316110   14602 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 11:31:26.410849   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 11:31:26.420301   14602 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 11:31:26.420384   14602 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 11:31:26.423105   14602 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 11:31:26.423165   14602 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 11:31:26.506346   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 11:31:26.506451   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 11:31:26.511070   14602 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.306713595s)
	I1202 11:31:26.511158   14602 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1202 11:31:26.512423   14602 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.104198014s)
	I1202 11:31:26.513351   14602 node_ready.go:35] waiting up to 6m0s for node "addons-522394" to be "Ready" ...
	I1202 11:31:26.701685   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 11:31:26.701766   14602 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 11:31:26.716987   14602 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 11:31:26.717013   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 11:31:26.801942   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 11:31:26.801990   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 11:31:26.818566   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 11:31:27.005561   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 11:31:27.100901   14602 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:27.100935   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 11:31:27.109529   14602 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 11:31:27.109554   14602 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 11:31:27.306082   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:27.614206   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 11:31:27.614283   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 11:31:27.702826   14602 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-522394" context rescaled to 1 replicas
	I1202 11:31:28.004344   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 11:31:28.004450   14602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 11:31:28.216668   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 11:31:28.216896   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 11:31:28.418686   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 11:31:28.418775   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 11:31:28.810816   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:28.913851   14602 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 11:31:28.913879   14602 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 11:31:29.119635   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 11:31:29.327064   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.803972764s)
	I1202 11:31:31.026963   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:31.204719   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.502962196s)
	I1202 11:31:31.204774   14602 addons.go:475] Verifying addon ingress=true in "addons-522394"
	I1202 11:31:31.204809   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.494078658s)
	I1202 11:31:31.204883   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.491780344s)
	I1202 11:31:31.204962   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.403052137s)
	I1202 11:31:31.205078   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.401900391s)
	I1202 11:31:31.205108   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.392648077s)
	I1202 11:31:31.205167   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.286247631s)
	I1202 11:31:31.205624   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.203781972s)
	I1202 11:31:31.205680   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.794733783s)
	I1202 11:31:31.205713   14602 addons.go:475] Verifying addon registry=true in "addons-522394"
	I1202 11:31:31.205971   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.200311107s)
	I1202 11:31:31.206109   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.387237663s)
	I1202 11:31:31.206138   14602 addons.go:475] Verifying addon metrics-server=true in "addons-522394"
	I1202 11:31:31.207542   14602 out.go:177] * Verifying registry addon...
	I1202 11:31:31.208077   14602 out.go:177] * Verifying ingress addon...
	I1202 11:31:31.208118   14602 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-522394 service yakd-dashboard -n yakd-dashboard
	
	I1202 11:31:31.209779   14602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 11:31:31.210924   14602 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 11:31:31.217586   14602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 11:31:31.217608   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:31.217794   14602 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 11:31:31.217815   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1202 11:31:31.225831   14602 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1202 11:31:31.716035   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:31.815740   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:32.006766   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.700624245s)
	W1202 11:31:32.006820   14602 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 11:31:32.006845   14602 retry.go:31] will retry after 356.321884ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 11:31:32.214037   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:32.215538   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:32.234579   14602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 11:31:32.234655   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:32.255563   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:32.364190   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 11:31:32.423627   14602 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 11:31:32.505853   14602 addons.go:234] Setting addon gcp-auth=true in "addons-522394"
	I1202 11:31:32.505914   14602 host.go:66] Checking if "addons-522394" exists ...
	I1202 11:31:32.506431   14602 cli_runner.go:164] Run: docker container inspect addons-522394 --format={{.State.Status}}
	I1202 11:31:32.534628   14602 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 11:31:32.534691   14602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-522394
	I1202 11:31:32.554569   14602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/addons-522394/id_rsa Username:docker}
	I1202 11:31:32.712615   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:32.713514   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:32.927114   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.807405531s)
	I1202 11:31:32.927161   14602 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-522394"
	I1202 11:31:32.928923   14602 out.go:177] * Verifying csi-hostpath-driver addon...
	I1202 11:31:32.931262   14602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 11:31:32.934553   14602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 11:31:32.934577   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:33.213897   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:33.214155   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:33.434406   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:33.516975   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:33.713933   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:33.714263   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:33.934730   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:34.213814   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:34.214241   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:34.434553   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:34.713375   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:34.714016   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:34.934217   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:35.177959   14602 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.813722275s)
	I1202 11:31:35.178026   14602 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.643360252s)
	I1202 11:31:35.180020   14602 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 11:31:35.181470   14602 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1202 11:31:35.182961   14602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 11:31:35.182974   14602 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 11:31:35.199564   14602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 11:31:35.199588   14602 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 11:31:35.213804   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:35.214534   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:35.217166   14602 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 11:31:35.217183   14602 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 11:31:35.234220   14602 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 11:31:35.435108   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:35.555010   14602 addons.go:475] Verifying addon gcp-auth=true in "addons-522394"
	I1202 11:31:35.556578   14602 out.go:177] * Verifying gcp-auth addon...
	I1202 11:31:35.558569   14602 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 11:31:35.560775   14602 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 11:31:35.560791   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:35.713370   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:35.713675   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:35.934138   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:36.016509   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:36.061872   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:36.213899   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:36.214539   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:36.434716   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:36.562284   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:36.713076   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:36.714048   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:36.934604   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:37.061201   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:37.212641   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:37.213771   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:37.434435   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:37.562719   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:37.713468   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:37.713849   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:37.934235   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:38.016725   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:38.062238   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:38.213059   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:38.214075   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:38.434598   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:38.562175   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:38.713301   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:38.714179   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:38.934730   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:39.061782   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:39.213469   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:39.213896   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:39.434162   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:39.561697   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:39.713351   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:39.713769   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:39.934438   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:40.016762   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:40.062061   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:40.212600   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:40.213543   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:40.434859   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:40.561610   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:40.713367   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:40.714523   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:40.935230   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:41.061727   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:41.213169   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:41.214420   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:41.434918   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:41.561870   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:41.713508   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:41.714010   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:41.934126   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:42.061977   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:42.212554   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:42.214781   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:42.434019   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:42.516388   14602 node_ready.go:53] node "addons-522394" has status "Ready":"False"
	I1202 11:31:42.561802   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:42.713572   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:42.714182   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:42.934304   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:43.061995   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:43.212434   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:43.213774   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:43.434140   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:43.562644   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:43.713262   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:43.714581   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:44.008705   14602 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 11:31:44.008730   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:44.016837   14602 node_ready.go:49] node "addons-522394" has status "Ready":"True"
	I1202 11:31:44.016864   14602 node_ready.go:38] duration metric: took 17.503456307s for node "addons-522394" to be "Ready" ...
	I1202 11:31:44.016878   14602 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:31:44.041239   14602 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace to be "Ready" ...
	I1202 11:31:44.112563   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:44.215036   14602 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 11:31:44.215071   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:44.215789   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:44.437136   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:44.603481   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:44.715429   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:44.716398   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:44.935571   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:45.062158   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:45.214805   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:45.217998   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:45.504050   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:45.604227   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:45.714322   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:45.715787   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:45.937142   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:46.102974   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:46.104415   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:46.214191   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:46.215578   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:46.437860   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:46.602154   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:46.716066   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:46.716762   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:46.937310   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:47.062195   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:47.213580   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:47.214485   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:47.435860   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:47.561253   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:47.714052   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:47.714745   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:47.935880   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:48.062054   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:48.214905   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:48.215271   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:48.436128   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:48.548022   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:48.561794   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:48.714418   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:48.715273   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:48.935504   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:49.061481   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:49.214218   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:49.214831   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:49.436589   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:49.562125   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:49.713156   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:49.714498   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:49.936840   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:50.108903   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:50.213179   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:50.214208   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:50.436050   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:50.561903   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:50.713863   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:50.714188   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:50.936049   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:51.046915   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:51.061681   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:51.213853   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:51.214659   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:51.436824   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:51.562283   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:51.713541   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:51.714444   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:51.936569   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:52.061743   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:52.214100   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:52.214726   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:52.436035   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:52.561723   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:52.713842   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:52.714685   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:52.935878   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:53.047351   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:53.062598   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:53.213739   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:53.214686   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:53.436694   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:53.562896   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:53.714579   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:53.714810   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:53.934992   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:54.062153   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:54.213262   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:54.214134   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:54.435461   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:54.561658   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:54.714639   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:54.714967   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:54.936168   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:55.062577   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:55.215442   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:55.215975   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:55.436361   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:55.546811   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:55.562552   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:55.713451   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:55.714338   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:56.020061   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:56.103072   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:56.213639   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:56.214768   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:56.436056   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:56.562683   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:56.713822   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:56.715072   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:56.936230   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:57.061811   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:57.213828   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:57.214563   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:57.435700   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:57.546921   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:57.562389   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:57.713112   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:57.714229   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:57.936408   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:58.062040   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:58.214018   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:58.215358   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:58.435874   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:58.563100   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:58.715202   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:58.716010   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:58.935433   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:59.061816   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:59.213773   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:59.215048   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:59.434926   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:31:59.546955   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:31:59.561122   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:31:59.713280   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:31:59.714776   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:31:59.936478   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:00.061346   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:00.213677   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:00.216994   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:00.435293   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:00.561341   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:00.713755   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:00.715082   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:00.935990   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:01.062489   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:01.213711   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:01.214825   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:01.435880   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:01.547735   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:01.562195   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:01.713077   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:01.714091   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:01.934981   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:02.062075   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:02.214540   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:02.214914   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:02.435274   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:02.562648   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:02.713989   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:02.814825   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:02.935935   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:03.062334   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:03.213217   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:03.214411   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:03.435526   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:03.562115   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:03.713568   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:03.714592   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:03.936143   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:04.047803   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:04.061771   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:04.214168   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:04.214771   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:04.435884   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:04.562369   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:04.713345   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:04.714259   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:04.935595   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:05.061759   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:05.213825   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:05.215396   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:05.434884   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:05.561366   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:05.713648   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:05.714685   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:05.936693   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:06.062127   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:06.213306   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:06.214041   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:06.435107   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:06.547169   14602 pod_ready.go:103] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:06.561596   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:06.713775   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:06.714562   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:06.935466   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:07.102201   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:07.213181   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:07.215700   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:07.435631   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:07.561955   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:07.714231   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:07.714832   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:07.936078   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:08.047329   14602 pod_ready.go:93] pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.047358   14602 pod_ready.go:82] duration metric: took 24.006089019s for pod "amd-gpu-device-plugin-czks8" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.047373   14602 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2cr8g" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.051655   14602 pod_ready.go:93] pod "coredns-7c65d6cfc9-2cr8g" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.051673   14602 pod_ready.go:82] duration metric: took 4.291677ms for pod "coredns-7c65d6cfc9-2cr8g" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.051691   14602 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.055477   14602 pod_ready.go:93] pod "etcd-addons-522394" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.055498   14602 pod_ready.go:82] duration metric: took 3.800041ms for pod "etcd-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.055511   14602 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.059136   14602 pod_ready.go:93] pod "kube-apiserver-addons-522394" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.059154   14602 pod_ready.go:82] duration metric: took 3.637196ms for pod "kube-apiserver-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.059163   14602 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.060836   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:08.063086   14602 pod_ready.go:93] pod "kube-controller-manager-addons-522394" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.063105   14602 pod_ready.go:82] duration metric: took 3.935451ms for pod "kube-controller-manager-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.063118   14602 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7vj6f" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.213091   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:08.214184   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:08.435547   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:08.445381   14602 pod_ready.go:93] pod "kube-proxy-7vj6f" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.445405   14602 pod_ready.go:82] duration metric: took 382.279224ms for pod "kube-proxy-7vj6f" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.445415   14602 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.561529   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:08.713888   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:08.714544   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:08.845735   14602 pod_ready.go:93] pod "kube-scheduler-addons-522394" in "kube-system" namespace has status "Ready":"True"
	I1202 11:32:08.845764   14602 pod_ready.go:82] duration metric: took 400.341951ms for pod "kube-scheduler-addons-522394" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.845775   14602 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace to be "Ready" ...
	I1202 11:32:08.935615   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:09.062216   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:09.212816   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:09.213973   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:09.435700   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:09.562262   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:09.715359   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:09.716284   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:09.936041   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:10.103453   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:10.214031   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:10.216290   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:10.505027   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:10.603838   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:10.716891   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:10.718010   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:10.913902   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:11.005742   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:11.104418   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:11.216927   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:11.217937   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:11.507449   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:11.603957   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:11.717527   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:11.719040   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:11.935482   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:12.061591   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:12.213518   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:12.214745   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:12.435255   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:12.562713   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:12.713623   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:12.715758   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:12.936213   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:13.103343   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:13.214492   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:13.214847   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:13.351329   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:13.435799   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:13.562048   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:13.714604   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:13.715344   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:13.936029   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:14.062925   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:14.215328   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:14.215460   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:14.436073   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:14.561828   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:14.714083   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:14.714644   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:14.936639   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:15.062120   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:15.214771   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:15.215320   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:15.351955   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:15.435757   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:15.562113   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:15.715368   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:15.715413   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:15.936239   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:16.062660   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:16.213741   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:16.214542   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:16.435573   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:16.561801   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:16.713926   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:16.714962   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:16.935960   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:17.061647   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:17.213814   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:17.215153   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:17.436429   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:17.561946   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:17.713677   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:17.715137   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:17.851694   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:17.935888   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:18.062724   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:18.213647   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:18.215130   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:18.438875   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:18.561896   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:18.715047   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:18.715500   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:18.936133   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:19.103642   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:19.214302   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:19.215284   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:19.509538   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:19.601673   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:19.714257   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:19.714884   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:19.902577   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:19.936033   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:20.062136   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:20.213710   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:20.215050   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:20.435781   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:20.561873   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:20.714139   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:20.714448   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:20.936352   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:21.062456   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:21.259425   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:21.259928   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:21.437320   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:21.562374   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:21.713729   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:21.714297   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:21.935844   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:22.063340   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:22.213397   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:22.214638   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:22.352099   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:22.435788   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:22.562089   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:22.714097   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:22.714555   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:22.936673   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:23.062499   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:23.213814   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:23.214508   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:23.436650   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:23.562251   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:23.713346   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 11:32:23.714315   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:23.935858   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:24.062606   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:24.213670   14602 kapi.go:107] duration metric: took 53.003885246s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 11:32:24.214558   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:24.435190   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:24.562531   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:24.715304   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:24.907605   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:25.005506   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:25.103671   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:25.215923   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:25.436660   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:25.562473   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:25.714853   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:25.935830   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:26.061963   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:26.215469   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:26.436104   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:26.561879   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:26.715579   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:26.935666   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:27.062520   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:27.214609   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:27.350661   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:27.435745   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:27.562302   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:27.714158   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:27.935962   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:28.101748   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:28.215334   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:28.435069   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:28.562392   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 11:32:28.714806   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:28.936078   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:29.103502   14602 kapi.go:107] duration metric: took 53.544924052s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 11:32:29.105287   14602 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-522394 cluster.
	I1202 11:32:29.106847   14602 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 11:32:29.109263   14602 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1202 11:32:29.215008   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:29.351421   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:29.436468   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:29.715615   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:29.935807   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:30.214922   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:30.436481   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:30.715287   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:31.006638   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:31.215631   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:31.406529   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:31.505951   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:31.719179   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:31.937642   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:32.215434   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:32.436006   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:32.715459   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:32.935340   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:33.215589   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:33.435771   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:33.715028   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:33.852153   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:33.935695   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:34.216586   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:34.435895   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:34.714446   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:34.935909   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:35.214936   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:35.436161   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:35.715186   14602 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 11:32:35.902746   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:35.936938   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:36.216572   14602 kapi.go:107] duration metric: took 1m5.005641999s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 11:32:36.436261   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:36.935920   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:37.435329   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:37.935481   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:38.351060   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:38.436084   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:38.936743   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:39.436403   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:39.935874   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:40.351454   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:40.437019   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:40.935677   14602 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 11:32:41.439488   14602 kapi.go:107] duration metric: took 1m8.508222368s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 11:32:41.441284   14602 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, ingress-dns, amd-gpu-device-plugin, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1202 11:32:41.443272   14602 addons.go:510] duration metric: took 1m16.473059725s for enable addons: enabled=[storage-provisioner cloud-spanner ingress-dns amd-gpu-device-plugin inspektor-gadget nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1202 11:32:42.351544   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:44.851785   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:47.351839   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:49.850755   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:51.851399   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:54.350774   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:56.351373   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:32:58.351619   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:33:00.851493   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:33:02.851739   14602 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"False"
	I1202 11:33:04.352033   14602 pod_ready.go:93] pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace has status "Ready":"True"
	I1202 11:33:04.352062   14602 pod_ready.go:82] duration metric: took 55.506278545s for pod "metrics-server-84c5f94fbc-cmfs5" in "kube-system" namespace to be "Ready" ...
	I1202 11:33:04.352074   14602 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-kwcbg" in "kube-system" namespace to be "Ready" ...
	I1202 11:33:04.356463   14602 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-kwcbg" in "kube-system" namespace has status "Ready":"True"
	I1202 11:33:04.356487   14602 pod_ready.go:82] duration metric: took 4.405567ms for pod "nvidia-device-plugin-daemonset-kwcbg" in "kube-system" namespace to be "Ready" ...
	I1202 11:33:04.356512   14602 pod_ready.go:39] duration metric: took 1m20.339620891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:33:04.356534   14602 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:33:04.356573   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 11:33:04.356629   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 11:33:04.390119   14602 cri.go:89] found id: "def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:04.390141   14602 cri.go:89] found id: ""
	I1202 11:33:04.390151   14602 logs.go:282] 1 containers: [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc]
	I1202 11:33:04.390207   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.393410   14602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 11:33:04.393472   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 11:33:04.427097   14602 cri.go:89] found id: "ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:04.427126   14602 cri.go:89] found id: ""
	I1202 11:33:04.427136   14602 logs.go:282] 1 containers: [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5]
	I1202 11:33:04.427182   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.430466   14602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 11:33:04.430528   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 11:33:04.462926   14602 cri.go:89] found id: "9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:04.462951   14602 cri.go:89] found id: ""
	I1202 11:33:04.462959   14602 logs.go:282] 1 containers: [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad]
	I1202 11:33:04.462997   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.466248   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 11:33:04.466300   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 11:33:04.499482   14602 cri.go:89] found id: "0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:04.499503   14602 cri.go:89] found id: ""
	I1202 11:33:04.499514   14602 logs.go:282] 1 containers: [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0]
	I1202 11:33:04.499570   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.503014   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 11:33:04.503087   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 11:33:04.535453   14602 cri.go:89] found id: "407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:04.535476   14602 cri.go:89] found id: ""
	I1202 11:33:04.535483   14602 logs.go:282] 1 containers: [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402]
	I1202 11:33:04.535521   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.538678   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 11:33:04.538730   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 11:33:04.571654   14602 cri.go:89] found id: "a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:04.571680   14602 cri.go:89] found id: ""
	I1202 11:33:04.571688   14602 logs.go:282] 1 containers: [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611]
	I1202 11:33:04.571728   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.575208   14602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 11:33:04.575275   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 11:33:04.608516   14602 cri.go:89] found id: "4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:04.608541   14602 cri.go:89] found id: ""
	I1202 11:33:04.608548   14602 logs.go:282] 1 containers: [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c]
	I1202 11:33:04.608598   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:04.611910   14602 logs.go:123] Gathering logs for describe nodes ...
	I1202 11:33:04.611937   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 11:33:04.707312   14602 logs.go:123] Gathering logs for coredns [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad] ...
	I1202 11:33:04.707343   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:04.741712   14602 logs.go:123] Gathering logs for kube-proxy [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402] ...
	I1202 11:33:04.741741   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:04.773702   14602 logs.go:123] Gathering logs for kube-controller-manager [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611] ...
	I1202 11:33:04.773727   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:04.834157   14602 logs.go:123] Gathering logs for kindnet [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c] ...
	I1202 11:33:04.834193   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:04.868591   14602 logs.go:123] Gathering logs for kubelet ...
	I1202 11:33:04.868619   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 11:33:04.957351   14602 logs.go:123] Gathering logs for dmesg ...
	I1202 11:33:04.957398   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 11:33:04.969803   14602 logs.go:123] Gathering logs for kube-scheduler [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0] ...
	I1202 11:33:04.969836   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:05.009507   14602 logs.go:123] Gathering logs for CRI-O ...
	I1202 11:33:05.009545   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 11:33:05.082024   14602 logs.go:123] Gathering logs for container status ...
	I1202 11:33:05.082059   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 11:33:05.122143   14602 logs.go:123] Gathering logs for kube-apiserver [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc] ...
	I1202 11:33:05.122169   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:05.164322   14602 logs.go:123] Gathering logs for etcd [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5] ...
	I1202 11:33:05.164353   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:07.714921   14602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:33:07.729411   14602 api_server.go:72] duration metric: took 1m42.759234451s to wait for apiserver process to appear ...
	I1202 11:33:07.729435   14602 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:33:07.729476   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 11:33:07.729527   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 11:33:07.762364   14602 cri.go:89] found id: "def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:07.762384   14602 cri.go:89] found id: ""
	I1202 11:33:07.762394   14602 logs.go:282] 1 containers: [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc]
	I1202 11:33:07.762460   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.765807   14602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 11:33:07.765867   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 11:33:07.798732   14602 cri.go:89] found id: "ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:07.798754   14602 cri.go:89] found id: ""
	I1202 11:33:07.798762   14602 logs.go:282] 1 containers: [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5]
	I1202 11:33:07.798814   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.802528   14602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 11:33:07.802597   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 11:33:07.835364   14602 cri.go:89] found id: "9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:07.835382   14602 cri.go:89] found id: ""
	I1202 11:33:07.835390   14602 logs.go:282] 1 containers: [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad]
	I1202 11:33:07.835443   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.838655   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 11:33:07.838718   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 11:33:07.871286   14602 cri.go:89] found id: "0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:07.871306   14602 cri.go:89] found id: ""
	I1202 11:33:07.871314   14602 logs.go:282] 1 containers: [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0]
	I1202 11:33:07.871359   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.874700   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 11:33:07.874760   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 11:33:07.908903   14602 cri.go:89] found id: "407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:07.908930   14602 cri.go:89] found id: ""
	I1202 11:33:07.908940   14602 logs.go:282] 1 containers: [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402]
	I1202 11:33:07.908982   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.912406   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 11:33:07.912470   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 11:33:07.945015   14602 cri.go:89] found id: "a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:07.945034   14602 cri.go:89] found id: ""
	I1202 11:33:07.945042   14602 logs.go:282] 1 containers: [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611]
	I1202 11:33:07.945094   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.948378   14602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 11:33:07.948433   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 11:33:07.981128   14602 cri.go:89] found id: "4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:07.981153   14602 cri.go:89] found id: ""
	I1202 11:33:07.981161   14602 logs.go:282] 1 containers: [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c]
	I1202 11:33:07.981206   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:07.984527   14602 logs.go:123] Gathering logs for kube-scheduler [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0] ...
	I1202 11:33:07.984552   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:08.023077   14602 logs.go:123] Gathering logs for kube-proxy [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402] ...
	I1202 11:33:08.023111   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:08.055977   14602 logs.go:123] Gathering logs for kindnet [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c] ...
	I1202 11:33:08.056003   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:08.088171   14602 logs.go:123] Gathering logs for CRI-O ...
	I1202 11:33:08.088194   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 11:33:08.165244   14602 logs.go:123] Gathering logs for etcd [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5] ...
	I1202 11:33:08.165279   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:08.215981   14602 logs.go:123] Gathering logs for coredns [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad] ...
	I1202 11:33:08.216014   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:08.250986   14602 logs.go:123] Gathering logs for describe nodes ...
	I1202 11:33:08.251018   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 11:33:08.348309   14602 logs.go:123] Gathering logs for kube-apiserver [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc] ...
	I1202 11:33:08.348340   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:08.392047   14602 logs.go:123] Gathering logs for kube-controller-manager [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611] ...
	I1202 11:33:08.392080   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:08.447661   14602 logs.go:123] Gathering logs for container status ...
	I1202 11:33:08.447697   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 11:33:08.488878   14602 logs.go:123] Gathering logs for kubelet ...
	I1202 11:33:08.488907   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 11:33:08.570123   14602 logs.go:123] Gathering logs for dmesg ...
	I1202 11:33:08.570159   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 11:33:11.083340   14602 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:33:11.087097   14602 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 11:33:11.088008   14602 api_server.go:141] control plane version: v1.31.2
	I1202 11:33:11.088030   14602 api_server.go:131] duration metric: took 3.358589227s to wait for apiserver health ...
	I1202 11:33:11.088039   14602 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:33:11.088059   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 11:33:11.088112   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 11:33:11.122112   14602 cri.go:89] found id: "def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:11.122131   14602 cri.go:89] found id: ""
	I1202 11:33:11.122139   14602 logs.go:282] 1 containers: [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc]
	I1202 11:33:11.122178   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.125452   14602 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 11:33:11.125501   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 11:33:11.158543   14602 cri.go:89] found id: "ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:11.158565   14602 cri.go:89] found id: ""
	I1202 11:33:11.158573   14602 logs.go:282] 1 containers: [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5]
	I1202 11:33:11.158616   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.161945   14602 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 11:33:11.161995   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 11:33:11.194572   14602 cri.go:89] found id: "9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:11.194598   14602 cri.go:89] found id: ""
	I1202 11:33:11.194607   14602 logs.go:282] 1 containers: [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad]
	I1202 11:33:11.194652   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.198084   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 11:33:11.198135   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 11:33:11.231901   14602 cri.go:89] found id: "0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:11.231924   14602 cri.go:89] found id: ""
	I1202 11:33:11.231931   14602 logs.go:282] 1 containers: [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0]
	I1202 11:33:11.231972   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.235216   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 11:33:11.235266   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 11:33:11.268737   14602 cri.go:89] found id: "407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:11.268757   14602 cri.go:89] found id: ""
	I1202 11:33:11.268765   14602 logs.go:282] 1 containers: [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402]
	I1202 11:33:11.268805   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.272029   14602 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 11:33:11.272099   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 11:33:11.304430   14602 cri.go:89] found id: "a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:11.304458   14602 cri.go:89] found id: ""
	I1202 11:33:11.304469   14602 logs.go:282] 1 containers: [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611]
	I1202 11:33:11.304512   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.307791   14602 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 11:33:11.307845   14602 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 11:33:11.341205   14602 cri.go:89] found id: "4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:11.341246   14602 cri.go:89] found id: ""
	I1202 11:33:11.341257   14602 logs.go:282] 1 containers: [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c]
	I1202 11:33:11.341315   14602 ssh_runner.go:195] Run: which crictl
	I1202 11:33:11.344900   14602 logs.go:123] Gathering logs for dmesg ...
	I1202 11:33:11.344931   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 11:33:11.356614   14602 logs.go:123] Gathering logs for describe nodes ...
	I1202 11:33:11.356637   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 11:33:11.544336   14602 logs.go:123] Gathering logs for kube-scheduler [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0] ...
	I1202 11:33:11.544362   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0"
	I1202 11:33:11.636199   14602 logs.go:123] Gathering logs for kube-proxy [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402] ...
	I1202 11:33:11.636234   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402"
	I1202 11:33:11.669482   14602 logs.go:123] Gathering logs for kube-controller-manager [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611] ...
	I1202 11:33:11.669513   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611"
	I1202 11:33:11.724795   14602 logs.go:123] Gathering logs for container status ...
	I1202 11:33:11.724827   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 11:33:11.766225   14602 logs.go:123] Gathering logs for kubelet ...
	I1202 11:33:11.766255   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 11:33:11.853567   14602 logs.go:123] Gathering logs for kube-apiserver [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc] ...
	I1202 11:33:11.853609   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc"
	I1202 11:33:11.897592   14602 logs.go:123] Gathering logs for etcd [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5] ...
	I1202 11:33:11.897630   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5"
	I1202 11:33:11.946691   14602 logs.go:123] Gathering logs for coredns [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad] ...
	I1202 11:33:11.946726   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad"
	I1202 11:33:11.981385   14602 logs.go:123] Gathering logs for kindnet [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c] ...
	I1202 11:33:11.981416   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c"
	I1202 11:33:12.014723   14602 logs.go:123] Gathering logs for CRI-O ...
	I1202 11:33:12.014753   14602 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 11:33:14.594602   14602 system_pods.go:59] 19 kube-system pods found
	I1202 11:33:14.594647   14602 system_pods.go:61] "amd-gpu-device-plugin-czks8" [28b7071f-be42-4af7-bcb6-44dcf77d9d72] Running
	I1202 11:33:14.594658   14602 system_pods.go:61] "coredns-7c65d6cfc9-2cr8g" [21278506-daa2-47ba-87a6-bd0a841d3f2f] Running
	I1202 11:33:14.594663   14602 system_pods.go:61] "csi-hostpath-attacher-0" [4c389e45-1a9d-4eee-90be-e9fac8b383e0] Running
	I1202 11:33:14.594668   14602 system_pods.go:61] "csi-hostpath-resizer-0" [5a26dca9-12e6-468f-9bb4-3e1ab16070e6] Running
	I1202 11:33:14.594673   14602 system_pods.go:61] "csi-hostpathplugin-cwsfz" [38d189a6-30cc-4de4-9554-b7b17ccabac5] Running
	I1202 11:33:14.594679   14602 system_pods.go:61] "etcd-addons-522394" [5900a6e2-e94e-45e3-8761-57ae9adb4852] Running
	I1202 11:33:14.594685   14602 system_pods.go:61] "kindnet-p2kn5" [f01c6cb1-1b80-489f-8f17-8cbd5b23bbad] Running
	I1202 11:33:14.594692   14602 system_pods.go:61] "kube-apiserver-addons-522394" [567d2d63-09b9-47d3-b623-c0841253d8a2] Running
	I1202 11:33:14.594697   14602 system_pods.go:61] "kube-controller-manager-addons-522394" [346cc3c6-56e4-41cc-bdaf-b83bc67642fa] Running
	I1202 11:33:14.594703   14602 system_pods.go:61] "kube-ingress-dns-minikube" [3438f8b3-3a02-44ca-af0e-0ae8f347d465] Running
	I1202 11:33:14.594711   14602 system_pods.go:61] "kube-proxy-7vj6f" [31c251d6-04a9-4ccc-858e-f070357e572a] Running
	I1202 11:33:14.594717   14602 system_pods.go:61] "kube-scheduler-addons-522394" [3c312aab-7760-497e-a3f3-6e527a60576f] Running
	I1202 11:33:14.594723   14602 system_pods.go:61] "metrics-server-84c5f94fbc-cmfs5" [d201f129-cdd9-474b-90ff-b22982035951] Running
	I1202 11:33:14.594730   14602 system_pods.go:61] "nvidia-device-plugin-daemonset-kwcbg" [e45feff4-5960-425e-9363-207b937d3696] Running
	I1202 11:33:14.594739   14602 system_pods.go:61] "registry-66c9cd494c-vdszr" [2c730b2c-d2ab-48fe-8268-0064ccf42ac1] Running
	I1202 11:33:14.594745   14602 system_pods.go:61] "registry-proxy-9xwj9" [9c2a618e-304b-4aef-b3a1-3daca132483a] Running
	I1202 11:33:14.594752   14602 system_pods.go:61] "snapshot-controller-56fcc65765-c8r8s" [f73951d4-ec85-4a1d-abac-ba3b7a4431e5] Running
	I1202 11:33:14.594758   14602 system_pods.go:61] "snapshot-controller-56fcc65765-dxlg6" [f401a09e-b82e-4309-afdf-e1f62db25a08] Running
	I1202 11:33:14.594767   14602 system_pods.go:61] "storage-provisioner" [98cd8826-798c-4d91-8c3f-77c5470e5fad] Running
	I1202 11:33:14.594775   14602 system_pods.go:74] duration metric: took 3.50672999s to wait for pod list to return data ...
	I1202 11:33:14.594788   14602 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:33:14.597006   14602 default_sa.go:45] found service account: "default"
	I1202 11:33:14.597024   14602 default_sa.go:55] duration metric: took 2.229729ms for default service account to be created ...
	I1202 11:33:14.597032   14602 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:33:14.605392   14602 system_pods.go:86] 19 kube-system pods found
	I1202 11:33:14.605418   14602 system_pods.go:89] "amd-gpu-device-plugin-czks8" [28b7071f-be42-4af7-bcb6-44dcf77d9d72] Running
	I1202 11:33:14.605426   14602 system_pods.go:89] "coredns-7c65d6cfc9-2cr8g" [21278506-daa2-47ba-87a6-bd0a841d3f2f] Running
	I1202 11:33:14.605430   14602 system_pods.go:89] "csi-hostpath-attacher-0" [4c389e45-1a9d-4eee-90be-e9fac8b383e0] Running
	I1202 11:33:14.605434   14602 system_pods.go:89] "csi-hostpath-resizer-0" [5a26dca9-12e6-468f-9bb4-3e1ab16070e6] Running
	I1202 11:33:14.605439   14602 system_pods.go:89] "csi-hostpathplugin-cwsfz" [38d189a6-30cc-4de4-9554-b7b17ccabac5] Running
	I1202 11:33:14.605443   14602 system_pods.go:89] "etcd-addons-522394" [5900a6e2-e94e-45e3-8761-57ae9adb4852] Running
	I1202 11:33:14.605447   14602 system_pods.go:89] "kindnet-p2kn5" [f01c6cb1-1b80-489f-8f17-8cbd5b23bbad] Running
	I1202 11:33:14.605451   14602 system_pods.go:89] "kube-apiserver-addons-522394" [567d2d63-09b9-47d3-b623-c0841253d8a2] Running
	I1202 11:33:14.605455   14602 system_pods.go:89] "kube-controller-manager-addons-522394" [346cc3c6-56e4-41cc-bdaf-b83bc67642fa] Running
	I1202 11:33:14.605459   14602 system_pods.go:89] "kube-ingress-dns-minikube" [3438f8b3-3a02-44ca-af0e-0ae8f347d465] Running
	I1202 11:33:14.605466   14602 system_pods.go:89] "kube-proxy-7vj6f" [31c251d6-04a9-4ccc-858e-f070357e572a] Running
	I1202 11:33:14.605469   14602 system_pods.go:89] "kube-scheduler-addons-522394" [3c312aab-7760-497e-a3f3-6e527a60576f] Running
	I1202 11:33:14.605476   14602 system_pods.go:89] "metrics-server-84c5f94fbc-cmfs5" [d201f129-cdd9-474b-90ff-b22982035951] Running
	I1202 11:33:14.605481   14602 system_pods.go:89] "nvidia-device-plugin-daemonset-kwcbg" [e45feff4-5960-425e-9363-207b937d3696] Running
	I1202 11:33:14.605487   14602 system_pods.go:89] "registry-66c9cd494c-vdszr" [2c730b2c-d2ab-48fe-8268-0064ccf42ac1] Running
	I1202 11:33:14.605491   14602 system_pods.go:89] "registry-proxy-9xwj9" [9c2a618e-304b-4aef-b3a1-3daca132483a] Running
	I1202 11:33:14.605494   14602 system_pods.go:89] "snapshot-controller-56fcc65765-c8r8s" [f73951d4-ec85-4a1d-abac-ba3b7a4431e5] Running
	I1202 11:33:14.605497   14602 system_pods.go:89] "snapshot-controller-56fcc65765-dxlg6" [f401a09e-b82e-4309-afdf-e1f62db25a08] Running
	I1202 11:33:14.605500   14602 system_pods.go:89] "storage-provisioner" [98cd8826-798c-4d91-8c3f-77c5470e5fad] Running
	I1202 11:33:14.605509   14602 system_pods.go:126] duration metric: took 8.472356ms to wait for k8s-apps to be running ...
	I1202 11:33:14.605518   14602 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:33:14.605557   14602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:33:14.616788   14602 system_svc.go:56] duration metric: took 11.262405ms WaitForService to wait for kubelet
	I1202 11:33:14.616812   14602 kubeadm.go:582] duration metric: took 1m49.646640687s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:33:14.616832   14602 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:33:14.619796   14602 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 11:33:14.619821   14602 node_conditions.go:123] node cpu capacity is 8
	I1202 11:33:14.619836   14602 node_conditions.go:105] duration metric: took 2.99908ms to run NodePressure ...
	I1202 11:33:14.619850   14602 start.go:241] waiting for startup goroutines ...
	I1202 11:33:14.619859   14602 start.go:246] waiting for cluster config update ...
	I1202 11:33:14.619880   14602 start.go:255] writing updated cluster config ...
	I1202 11:33:14.620149   14602 ssh_runner.go:195] Run: rm -f paused
	I1202 11:33:14.667932   14602 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 11:33:14.670094   14602 out.go:177] * Done! kubectl is now configured to use "addons-522394" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 11:36:10 addons-522394 crio[1032]: time="2024-12-02 11:36:10.419520116Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5f85ff4588-zn75n Namespace:ingress-nginx ID:1e672cb075bccecf0a6b4b08dfe09c970b0f2c881393871d1497a1a28addbe73 UID:9a9d4c80-bbac-4ba7-a58d-901836ab0829 NetNS:/var/run/netns/4b874a22-dd21-46ec-b46a-572549988408 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 02 11:36:10 addons-522394 crio[1032]: time="2024-12-02 11:36:10.419629493Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5f85ff4588-zn75n from CNI network \"kindnet\" (type=ptp)"
	Dec 02 11:36:10 addons-522394 crio[1032]: time="2024-12-02 11:36:10.457717665Z" level=info msg="Stopped pod sandbox: 1e672cb075bccecf0a6b4b08dfe09c970b0f2c881393871d1497a1a28addbe73" id=3286ef5e-9414-4778-9bc6-a5aa6a925e71 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:36:10 addons-522394 crio[1032]: time="2024-12-02 11:36:10.749360074Z" level=info msg="Removing container: 8a910d74d36d8e2c6eba88da5c0fcf72b2e5f6db97c74c7b5e533deaa65c3f3c" id=d2d06e54-f044-437a-9ee8-9cdeee5cd49c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 11:36:10 addons-522394 crio[1032]: time="2024-12-02 11:36:10.763325961Z" level=info msg="Removed container 8a910d74d36d8e2c6eba88da5c0fcf72b2e5f6db97c74c7b5e533deaa65c3f3c: ingress-nginx/ingress-nginx-controller-5f85ff4588-zn75n/controller" id=d2d06e54-f044-437a-9ee8-9cdeee5cd49c name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.629921726Z" level=info msg="Removing container: d00d1bfa9ce35d4aef143994090abd8241f91ae83b0fda0f3adb8568411fd407" id=95d3480e-f9a7-4b87-ab84-a41fcef2d067 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.642214636Z" level=info msg="Removed container d00d1bfa9ce35d4aef143994090abd8241f91ae83b0fda0f3adb8568411fd407: ingress-nginx/ingress-nginx-admission-patch-j7fb2/patch" id=95d3480e-f9a7-4b87-ab84-a41fcef2d067 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.643400147Z" level=info msg="Removing container: 963255c6f0cb5054d30f7efb8ffe463c90f44e630d25adea922463904d7f9053" id=917fc639-ab2b-407a-b51c-5f69155493ea name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.656966744Z" level=info msg="Removed container 963255c6f0cb5054d30f7efb8ffe463c90f44e630d25adea922463904d7f9053: ingress-nginx/ingress-nginx-admission-create-jrfdn/create" id=917fc639-ab2b-407a-b51c-5f69155493ea name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.658272000Z" level=info msg="Stopping pod sandbox: a5ed946ce277b57410f6d69c581babc728f585a907609975cb5bc94a5e15c422" id=1cbeb8dd-0cd0-488a-b521-da3db6218867 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.658312985Z" level=info msg="Stopped pod sandbox (already stopped): a5ed946ce277b57410f6d69c581babc728f585a907609975cb5bc94a5e15c422" id=1cbeb8dd-0cd0-488a-b521-da3db6218867 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.658562293Z" level=info msg="Removing pod sandbox: a5ed946ce277b57410f6d69c581babc728f585a907609975cb5bc94a5e15c422" id=6f66972d-b8b4-4973-bf14-b6d03c3cf353 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.664258578Z" level=info msg="Removed pod sandbox: a5ed946ce277b57410f6d69c581babc728f585a907609975cb5bc94a5e15c422" id=6f66972d-b8b4-4973-bf14-b6d03c3cf353 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.664644167Z" level=info msg="Stopping pod sandbox: 1e672cb075bccecf0a6b4b08dfe09c970b0f2c881393871d1497a1a28addbe73" id=9a82c466-6a50-4969-86f7-8d20ce9d58d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.664669937Z" level=info msg="Stopped pod sandbox (already stopped): 1e672cb075bccecf0a6b4b08dfe09c970b0f2c881393871d1497a1a28addbe73" id=9a82c466-6a50-4969-86f7-8d20ce9d58d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.664895010Z" level=info msg="Removing pod sandbox: 1e672cb075bccecf0a6b4b08dfe09c970b0f2c881393871d1497a1a28addbe73" id=b1dbbd32-ccda-4bf4-9bdf-a819b156dabc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.670414191Z" level=info msg="Removed pod sandbox: 1e672cb075bccecf0a6b4b08dfe09c970b0f2c881393871d1497a1a28addbe73" id=b1dbbd32-ccda-4bf4-9bdf-a819b156dabc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.670766839Z" level=info msg="Stopping pod sandbox: ea2d07f4609af2a4b5b7767fe5815233818dd98e6c204e595430afc06e6a8a30" id=2e57e1e2-ee27-4564-9912-9910f0ed48af name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.670790952Z" level=info msg="Stopped pod sandbox (already stopped): ea2d07f4609af2a4b5b7767fe5815233818dd98e6c204e595430afc06e6a8a30" id=2e57e1e2-ee27-4564-9912-9910f0ed48af name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.671052368Z" level=info msg="Removing pod sandbox: ea2d07f4609af2a4b5b7767fe5815233818dd98e6c204e595430afc06e6a8a30" id=d1e557b0-a56f-4084-9221-fb9515a14ef3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.678497457Z" level=info msg="Removed pod sandbox: ea2d07f4609af2a4b5b7767fe5815233818dd98e6c204e595430afc06e6a8a30" id=d1e557b0-a56f-4084-9221-fb9515a14ef3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.678928105Z" level=info msg="Stopping pod sandbox: eab7ce2752a815f4ca6dd0de577da6906dbda90c72b3d2f8ff9276ea5dedae06" id=f76328bf-fe40-45fd-9d33-6a06266f0e2d name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.678969751Z" level=info msg="Stopped pod sandbox (already stopped): eab7ce2752a815f4ca6dd0de577da6906dbda90c72b3d2f8ff9276ea5dedae06" id=f76328bf-fe40-45fd-9d33-6a06266f0e2d name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.679310804Z" level=info msg="Removing pod sandbox: eab7ce2752a815f4ca6dd0de577da6906dbda90c72b3d2f8ff9276ea5dedae06" id=9d2527fb-7538-405c-b461-d38a0567f003 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 02 11:36:19 addons-522394 crio[1032]: time="2024-12-02 11:36:19.685762479Z" level=info msg="Removed pod sandbox: eab7ce2752a815f4ca6dd0de577da6906dbda90c72b3d2f8ff9276ea5dedae06" id=9d2527fb-7538-405c-b461-d38a0567f003 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ffe01931f48ab       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   c5c0ca3092f2c       hello-world-app-55bf9c44b4-src62
	56aa088d1862e       docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303                         5 minutes ago       Running             nginx                     0                   6b1691e2f24d1       nginx
	2db6ab878cea3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   62a465a3bfe95       busybox
	b7cbf62e719cd       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   c174ae3606242       local-path-provisioner-86d989889c-qf4zs
	7507379c3f1a3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   6 minutes ago       Running             metrics-server            0                   04d283bcf6fe3       metrics-server-84c5f94fbc-cmfs5
	2cf66f4197da9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   75d2768cabdaf       storage-provisioner
	9bbf1ed828b35       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   d45379ea69265       coredns-7c65d6cfc9-2cr8g
	4702bd641c5b0       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                      7 minutes ago       Running             kindnet-cni               0                   3ed71d144052d       kindnet-p2kn5
	407fff8704469       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   1180b8f94fe78       kube-proxy-7vj6f
	a5b86f11cb862       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        7 minutes ago       Running             kube-controller-manager   0                   debccfba29529       kube-controller-manager-addons-522394
	def246b91f2b5       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        7 minutes ago       Running             kube-apiserver            0                   d29bab926b161       kube-apiserver-addons-522394
	ee5fef32ba1e2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   4417b3c3533a8       etcd-addons-522394
	0251b2cec71bb       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        7 minutes ago       Running             kube-scheduler            0                   72b2dd0085cf8       kube-scheduler-addons-522394
	
	
	==> coredns [9bbf1ed828b352c3670d6ca717318e9ea60293bed2a420f5428225c044a715ad] <==
	[INFO] 10.244.0.22:39553 - 46312 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005085145s
	[INFO] 10.244.0.22:42918 - 43798 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005728876s
	[INFO] 10.244.0.22:50580 - 64103 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00592965s
	[INFO] 10.244.0.22:39553 - 26777 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005778102s
	[INFO] 10.244.0.22:45947 - 55629 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006161521s
	[INFO] 10.244.0.22:57132 - 42177 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005681622s
	[INFO] 10.244.0.22:48361 - 14569 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005938849s
	[INFO] 10.244.0.22:43209 - 28443 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006140914s
	[INFO] 10.244.0.22:48229 - 41442 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006096795s
	[INFO] 10.244.0.22:50580 - 27024 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006361908s
	[INFO] 10.244.0.22:48361 - 9869 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006088224s
	[INFO] 10.244.0.22:39553 - 45125 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006380641s
	[INFO] 10.244.0.22:48229 - 11960 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006189522s
	[INFO] 10.244.0.22:42918 - 25196 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006600798s
	[INFO] 10.244.0.22:45947 - 23185 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006067807s
	[INFO] 10.244.0.22:50580 - 16016 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000148233s
	[INFO] 10.244.0.22:39553 - 42491 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047117s
	[INFO] 10.244.0.22:48361 - 39515 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081538s
	[INFO] 10.244.0.22:48229 - 65189 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000191458s
	[INFO] 10.244.0.22:43209 - 53387 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006579359s
	[INFO] 10.244.0.22:45947 - 19203 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000182618s
	[INFO] 10.244.0.22:57132 - 27856 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006730471s
	[INFO] 10.244.0.22:42918 - 9194 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000199647s
	[INFO] 10.244.0.22:43209 - 47415 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000077747s
	[INFO] 10.244.0.22:57132 - 41767 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074542s
	
	
	==> describe nodes <==
	Name:               addons-522394
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-522394
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=addons-522394
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T11_31_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-522394
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:31:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-522394
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:38:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:36:26 +0000   Mon, 02 Dec 2024 11:31:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:36:26 +0000   Mon, 02 Dec 2024 11:31:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:36:26 +0000   Mon, 02 Dec 2024 11:31:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:36:26 +0000   Mon, 02 Dec 2024 11:31:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-522394
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 b86fc5ec93fa45b6a66a36116aa0e647
	  System UUID:                c8f73ecb-2c1a-45b7-87c3-de079fb5e436
	  Boot ID:                    2a9b6797-354b-47aa-b86d-31dcdc265ca8
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  default                     hello-world-app-55bf9c44b4-src62           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 coredns-7c65d6cfc9-2cr8g                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m26s
	  kube-system                 etcd-addons-522394                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m32s
	  kube-system                 kindnet-p2kn5                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m27s
	  kube-system                 kube-apiserver-addons-522394               250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-controller-manager-addons-522394      200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-proxy-7vj6f                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-scheduler-addons-522394               100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 metrics-server-84c5f94fbc-cmfs5            100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m22s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  local-path-storage          local-path-provisioner-86d989889c-qf4zs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m22s  kube-proxy       
	  Normal   Starting                 7m32s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m32s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m32s  kubelet          Node addons-522394 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m32s  kubelet          Node addons-522394 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m32s  kubelet          Node addons-522394 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m28s  node-controller  Node addons-522394 event: Registered Node addons-522394 in Controller
	  Normal   NodeReady                7m8s   kubelet          Node addons-522394 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000801] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000892] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.642890] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024824] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.032587] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.029394] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.155032] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 2 11:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[  +1.007914] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000012] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[  +2.015805] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[  +4.127504] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[Dec 2 11:34] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[ +16.122279] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	[ +32.764471] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e2 ba 02 e3 ad a3 ba 8d 5d 9b 12 72 08 00
	
	
	==> etcd [ee5fef32ba1e21f27a8a21df593a04fefa359844449081234317dbd16c88dea5] <==
	{"level":"warn","ts":"2024-12-02T11:31:28.806544Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.757322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-522394\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-12-02T11:31:28.806843Z","caller":"traceutil/trace.go:171","msg":"trace[2056605895] range","detail":"{range_begin:/registry/minions/addons-522394; range_end:; response_count:1; response_revision:424; }","duration":"187.058684ms","start":"2024-12-02T11:31:28.619775Z","end":"2024-12-02T11:31:28.806833Z","steps":["trace[2056605895] 'agreement among raft nodes before linearized reading'  (duration: 186.730267ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:31:28.806574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.234825ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-02T11:31:28.807092Z","caller":"traceutil/trace.go:171","msg":"trace[1199302634] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:424; }","duration":"187.751601ms","start":"2024-12-02T11:31:28.619328Z","end":"2024-12-02T11:31:28.807080Z","steps":["trace[1199302634] 'agreement among raft nodes before linearized reading'  (duration: 186.682713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:31:28.807517Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.612468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-12-02T11:31:28.807599Z","caller":"traceutil/trace.go:171","msg":"trace[381932111] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:424; }","duration":"186.69389ms","start":"2024-12-02T11:31:28.620893Z","end":"2024-12-02T11:31:28.807587Z","steps":["trace[381932111] 'agreement among raft nodes before linearized reading'  (duration: 186.56975ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:31:28.808133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.452086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-02T11:31:28.808473Z","caller":"traceutil/trace.go:171","msg":"trace[296080295] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"105.553912ms","start":"2024-12-02T11:31:28.702903Z","end":"2024-12-02T11:31:28.808457Z","steps":["trace[296080295] 'process raft request'  (duration: 105.014251ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:28.808598Z","caller":"traceutil/trace.go:171","msg":"trace[681393968] transaction","detail":"{read_only:false; response_revision:426; number_of_response:1; }","duration":"101.418513ms","start":"2024-12-02T11:31:28.707173Z","end":"2024-12-02T11:31:28.808591Z","steps":["trace[681393968] 'process raft request'  (duration: 100.804599ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:28.815520Z","caller":"traceutil/trace.go:171","msg":"trace[1492983176] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:426; }","duration":"114.139526ms","start":"2024-12-02T11:31:28.700670Z","end":"2024-12-02T11:31:28.814809Z","steps":["trace[1492983176] 'agreement among raft nodes before linearized reading'  (duration: 107.343735ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:29.022068Z","caller":"traceutil/trace.go:171","msg":"trace[1344895774] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"102.581521ms","start":"2024-12-02T11:31:28.919469Z","end":"2024-12-02T11:31:29.022051Z","steps":["trace[1344895774] 'process raft request'  (duration: 95.274828ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:31:29.627269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.960238ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/gadget/gadget-role-binding\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:31:29.628691Z","caller":"traceutil/trace.go:171","msg":"trace[933279930] range","detail":"{range_begin:/registry/rolebindings/gadget/gadget-role-binding; range_end:; response_count:0; response_revision:508; }","duration":"104.379146ms","start":"2024-12-02T11:31:29.524290Z","end":"2024-12-02T11:31:29.628669Z","steps":["trace[933279930] 'agreement among raft nodes before linearized reading'  (duration: 92.111489ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:29.900773Z","caller":"traceutil/trace.go:171","msg":"trace[315314590] transaction","detail":"{read_only:false; response_revision:509; number_of_response:1; }","duration":"184.267399ms","start":"2024-12-02T11:31:29.716484Z","end":"2024-12-02T11:31:29.900752Z","steps":["trace[315314590] 'process raft request'  (duration: 105.458614ms)","trace[315314590] 'compare'  (duration: 78.678611ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-02T11:31:29.902113Z","caller":"traceutil/trace.go:171","msg":"trace[231914416] linearizableReadLoop","detail":"{readStateIndex:523; appliedIndex:520; }","duration":"185.211539ms","start":"2024-12-02T11:31:29.716888Z","end":"2024-12-02T11:31:29.902099Z","steps":["trace[231914416] 'read index received'  (duration: 105.076151ms)","trace[231914416] 'applied index is now lower than readState.Index'  (duration: 80.134644ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-02T11:31:29.903139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.235365ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:573"}
	{"level":"info","ts":"2024-12-02T11:31:29.908429Z","caller":"traceutil/trace.go:171","msg":"trace[2022297626] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:516; }","duration":"191.527203ms","start":"2024-12-02T11:31:29.716884Z","end":"2024-12-02T11:31:29.908411Z","steps":["trace[2022297626] 'agreement among raft nodes before linearized reading'  (duration: 186.131649ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:29.903185Z","caller":"traceutil/trace.go:171","msg":"trace[209334257] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"186.606746ms","start":"2024-12-02T11:31:29.716566Z","end":"2024-12-02T11:31:29.903173Z","steps":["trace[209334257] 'process raft request'  (duration: 185.266146ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:31:29.908442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.417349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:31:29.909084Z","caller":"traceutil/trace.go:171","msg":"trace[1945374566] range","detail":"{range_begin:/registry/deployments/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:516; }","duration":"192.06554ms","start":"2024-12-02T11:31:29.717003Z","end":"2024-12-02T11:31:29.909068Z","steps":["trace[1945374566] 'agreement among raft nodes before linearized reading'  (duration: 191.388367ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:29.903350Z","caller":"traceutil/trace.go:171","msg":"trace[1853367450] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"186.386985ms","start":"2024-12-02T11:31:29.716957Z","end":"2024-12-02T11:31:29.903344Z","steps":["trace[1853367450] 'process raft request'  (duration: 184.965319ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:31:29.903265Z","caller":"traceutil/trace.go:171","msg":"trace[1147361934] transaction","detail":"{read_only:false; response_revision:511; number_of_response:1; }","duration":"186.562916ms","start":"2024-12-02T11:31:29.716686Z","end":"2024-12-02T11:31:29.903249Z","steps":["trace[1147361934] 'process raft request'  (duration: 185.207937ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:32:48.820440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.099205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2024-12-02T11:32:48.820514Z","caller":"traceutil/trace.go:171","msg":"trace[1203524728] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1213; }","duration":"122.187111ms","start":"2024-12-02T11:32:48.698312Z","end":"2024-12-02T11:32:48.820500Z","steps":["trace[1203524728] 'range keys from in-memory index tree'  (duration: 122.013044ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-02T11:32:48.820557Z","caller":"traceutil/trace.go:171","msg":"trace[875834341] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"121.646489ms","start":"2024-12-02T11:32:48.698889Z","end":"2024-12-02T11:32:48.820536Z","steps":["trace[875834341] 'process raft request'  (duration: 57.330799ms)","trace[875834341] 'compare'  (duration: 64.180975ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:38:51 up 21 min,  0 users,  load average: 0.44, 0.45, 0.30
	Linux addons-522394 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4702bd641c5b022199e5d0380d62eff583e4a1d0c037c064c4dafc75532e759c] <==
	I1202 11:36:43.408375       1 main.go:301] handling current node
	I1202 11:36:53.408350       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:36:53.408393       1 main.go:301] handling current node
	I1202 11:37:03.408335       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:37:03.408368       1 main.go:301] handling current node
	I1202 11:37:13.401734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:37:13.401794       1 main.go:301] handling current node
	I1202 11:37:23.404156       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:37:23.404189       1 main.go:301] handling current node
	I1202 11:37:33.401543       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:37:33.401581       1 main.go:301] handling current node
	I1202 11:37:43.408350       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:37:43.408388       1 main.go:301] handling current node
	I1202 11:37:53.410086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:37:53.410123       1 main.go:301] handling current node
	I1202 11:38:03.410193       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:38:03.410227       1 main.go:301] handling current node
	I1202 11:38:13.401824       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:38:13.401862       1 main.go:301] handling current node
	I1202 11:38:23.403124       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:38:23.403162       1 main.go:301] handling current node
	I1202 11:38:33.401868       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:38:33.401910       1 main.go:301] handling current node
	I1202 11:38:43.405961       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:38:43.405994       1 main.go:301] handling current node
	
	
	==> kube-apiserver [def246b91f2b5e33e31a78017f253f3795c79ef0bdce11d6f9b39ff71c9db0fc] <==
	E1202 11:33:04.223594       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.91.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.91.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.91.223:443: connect: connection refused" logger="UnhandledError"
	E1202 11:33:04.225240       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.91.223:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.91.223:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.91.223:443: connect: connection refused" logger="UnhandledError"
	I1202 11:33:04.257410       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1202 11:33:22.348846       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43592: use of closed network connection
	E1202 11:33:22.511236       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43618: use of closed network connection
	I1202 11:33:31.437291       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.173.199"}
	I1202 11:33:37.197862       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1202 11:33:38.314827       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1202 11:33:42.662450       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1202 11:33:42.834492       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.4.66"}
	I1202 11:34:34.595666       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1202 11:34:52.328706       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:52.328762       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:52.341228       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:52.341364       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:52.342673       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:52.342710       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:52.358604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:52.358649       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 11:34:52.363863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 11:34:52.363904       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1202 11:34:53.342958       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1202 11:34:53.400473       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1202 11:34:53.409954       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1202 11:36:03.112895       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.181.158"}
	
	
	==> kube-controller-manager [a5b86f11cb86246a41760ee2b34dc464dbbe6aeb8d99874f900eda0beaa5c611] <==
	E1202 11:36:36.033842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:07.845419       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:07.845465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:11.211087       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:11.211129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:11.882162       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:11.882204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:16.928778       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:16.928817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:47.311028       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:47.311068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:49.007158       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:49.007204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:49.050944       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:49.050983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:37:59.529703       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:37:59.529756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:19.889401       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:19.889440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:31.689452       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:31.689491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:32.371470       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:32.371514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1202 11:38:36.345091       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1202 11:38:36.345136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [407fff8704469e2284640e8a5f54b7f0f64000b09e26e5f55e28a8d719327402] <==
	I1202 11:31:26.209045       1 server_linux.go:66] "Using iptables proxy"
	I1202 11:31:27.909979       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1202 11:31:27.910061       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 11:31:28.805396       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 11:31:28.805570       1 server_linux.go:169] "Using iptables Proxier"
	I1202 11:31:28.905704       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 11:31:28.906556       1 server.go:483] "Version info" version="v1.31.2"
	I1202 11:31:28.906922       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 11:31:28.908332       1 config.go:199] "Starting service config controller"
	I1202 11:31:28.908357       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 11:31:28.908401       1 config.go:105] "Starting endpoint slice config controller"
	I1202 11:31:28.908413       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 11:31:28.908988       1 config.go:328] "Starting node config controller"
	I1202 11:31:28.909009       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 11:31:29.108916       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 11:31:29.112318       1 shared_informer.go:320] Caches are synced for service config
	I1202 11:31:29.112806       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0251b2cec71bb2ec6bae20b2f73bd57c40e358a43510c1847c0b89dc662b80d0] <==
	E1202 11:31:17.222151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1202 11:31:17.222265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.222337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 11:31:17.222367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:17.222710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1202 11:31:17.222730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.052811       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1202 11:31:18.052849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.083338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1202 11:31:18.083378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.133975       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:18.134020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.135974       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:18.136005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.242945       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 11:31:18.242991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.265248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 11:31:18.265291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.368794       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1202 11:31:18.368831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.393065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 11:31:18.393114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:31:18.474874       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 11:31:18.474907       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1202 11:31:20.618175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 11:36:59 addons-522394 kubelet[1638]: E1202 11:36:59.485018    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139419484786387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:36:59 addons-522394 kubelet[1638]: E1202 11:36:59.485071    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139419484786387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:09 addons-522394 kubelet[1638]: E1202 11:37:09.486915    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139429486680320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:09 addons-522394 kubelet[1638]: E1202 11:37:09.486950    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139429486680320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:19 addons-522394 kubelet[1638]: E1202 11:37:19.489593    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139439489391506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:19 addons-522394 kubelet[1638]: E1202 11:37:19.489628    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139439489391506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:29 addons-522394 kubelet[1638]: E1202 11:37:29.491565    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139449491284955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:29 addons-522394 kubelet[1638]: E1202 11:37:29.491621    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139449491284955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:39 addons-522394 kubelet[1638]: E1202 11:37:39.494746    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139459494441914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:39 addons-522394 kubelet[1638]: E1202 11:37:39.494779    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139459494441914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:49 addons-522394 kubelet[1638]: E1202 11:37:49.497521    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139469497270995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:49 addons-522394 kubelet[1638]: E1202 11:37:49.497555    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139469497270995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:58 addons-522394 kubelet[1638]: I1202 11:37:58.442642    1638 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 11:37:59 addons-522394 kubelet[1638]: E1202 11:37:59.499643    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139479499429307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:37:59 addons-522394 kubelet[1638]: E1202 11:37:59.499683    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139479499429307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:09 addons-522394 kubelet[1638]: E1202 11:38:09.502237    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139489501960443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:09 addons-522394 kubelet[1638]: E1202 11:38:09.502272    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139489501960443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:19 addons-522394 kubelet[1638]: E1202 11:38:19.505037    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139499504753427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:19 addons-522394 kubelet[1638]: E1202 11:38:19.505081    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139499504753427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:29 addons-522394 kubelet[1638]: E1202 11:38:29.508177    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139509507908328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:29 addons-522394 kubelet[1638]: E1202 11:38:29.508218    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139509507908328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:39 addons-522394 kubelet[1638]: E1202 11:38:39.511074    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139519510837415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:39 addons-522394 kubelet[1638]: E1202 11:38:39.511109    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139519510837415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:49 addons-522394 kubelet[1638]: E1202 11:38:49.514098    1638 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139529513835413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:38:49 addons-522394 kubelet[1638]: E1202 11:38:49.514130    1638 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139529513835413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:625798,},InodesUsed:&UInt64Value{Value:242,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2cf66f4197da94cf92c19f51ce8b19fa55016456ce2724546fb6029163181857] <==
	I1202 11:31:44.852492       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1202 11:31:44.903483       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1202 11:31:44.903536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1202 11:31:44.911630       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1202 11:31:44.911785       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-522394_46bfd9a2-e490-4934-9b3a-74d022b3a481!
	I1202 11:31:44.911752       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fcfb205a-9f1e-4bc8-a96c-f8c7c0f764b9", APIVersion:"v1", ResourceVersion:"894", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-522394_46bfd9a2-e490-4934-9b3a-74d022b3a481 became leader
	I1202 11:31:45.012695       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-522394_46bfd9a2-e490-4934-9b3a-74d022b3a481!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-522394 -n addons-522394
helpers_test.go:261: (dbg) Run:  kubectl --context addons-522394 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (322.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (125.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-093284 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1202 11:49:29.940453   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-093284 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m2.196217136s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:591: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-093284       NotReady   control-plane   8m15s   v1.31.2
	ha-093284-m02   Ready      control-plane   7m55s   v1.31.2
	ha-093284-m04   Ready      <none>          6m39s   v1.31.2

                                                
                                                
-- /stdout --
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
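
For readers following the readiness check above: ha_test.go:594 renders each node's conditions through a go-template and ha_test.go:599 expects every node's Ready condition to be True; the NotReady control-plane node shows up as the single "Unknown" entry because its kubelet has stopped posting status. Below is a minimal, hypothetical sketch (not the actual ha_test.go implementation) of the same check written with client-go; the kubeconfig path and program layout are assumptions made for illustration only.

	// readycount.go - illustrative only; assumes ~/.kube/config points at the cluster under test.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (assumed location).
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}

		ready := 0
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				// A node whose kubelet has stopped reporting has Ready=Unknown,
				// which is what the NotReady node in the output above reflects.
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					ready++
				}
			}
		}
		fmt.Printf("%d/%d nodes Ready\n", ready, len(nodes.Items))
	}

Against the restarted ha-093284 cluster shown above, a check like this would be expected to report 2/3 nodes Ready, which is the condition the test treats as a failure.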
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-093284
helpers_test.go:235: (dbg) docker inspect ha-093284:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "16f4bc0b0c0c820c3ad09be303eb4dc9a60ec063091fc826e7ac3f40338ef242",
	        "Created": "2024-12-02T11:42:47.313278416Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 95643,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-02T11:49:13.796581162Z",
	            "FinishedAt": "2024-12-02T11:49:13.073369994Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/16f4bc0b0c0c820c3ad09be303eb4dc9a60ec063091fc826e7ac3f40338ef242/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/16f4bc0b0c0c820c3ad09be303eb4dc9a60ec063091fc826e7ac3f40338ef242/hostname",
	        "HostsPath": "/var/lib/docker/containers/16f4bc0b0c0c820c3ad09be303eb4dc9a60ec063091fc826e7ac3f40338ef242/hosts",
	        "LogPath": "/var/lib/docker/containers/16f4bc0b0c0c820c3ad09be303eb4dc9a60ec063091fc826e7ac3f40338ef242/16f4bc0b0c0c820c3ad09be303eb4dc9a60ec063091fc826e7ac3f40338ef242-json.log",
	        "Name": "/ha-093284",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-093284:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-093284",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1e061b3644f2b28d46e85f7dccd3b993e2d74643956dae98835e627008eb3b65-init/diff:/var/lib/docker/overlay2/098fd1b37996620d1394051c0f2d145ec7cc4c66ec7f899bcd76f461df21801b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1e061b3644f2b28d46e85f7dccd3b993e2d74643956dae98835e627008eb3b65/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1e061b3644f2b28d46e85f7dccd3b993e2d74643956dae98835e627008eb3b65/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1e061b3644f2b28d46e85f7dccd3b993e2d74643956dae98835e627008eb3b65/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-093284",
	                "Source": "/var/lib/docker/volumes/ha-093284/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-093284",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-093284",
	                "name.minikube.sigs.k8s.io": "ha-093284",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70e337c7c1a0fdc629c41d9823fdc50a6dd0f08c6e0213feeb8da296976c29d6",
	            "SandboxKey": "/var/run/docker/netns/70e337c7c1a0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32833"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-093284": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "236a02a97ab37a7cbcb9add7909b059647b29bbdda1183a89d07a6c4fa57e12d",
	                    "EndpointID": "2f71e9ba11ec7cdab85dd00dc23af22e054160227e785c6a091c8cbc8041de38",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-093284",
	                        "16f4bc0b0c0c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
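The full inspect dump above is what the test harness captures for the post-mortem; when poking at the container by hand, a Go template pulls out just the fields of interest (a sketch using the ha-093284 container and network names from this log, in the same format-template style the harness itself uses further down):

# container state plus its IP on the ha-093284 network
docker container inspect ha-093284 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-093284").IPAddress}}'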
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-093284 -n ha-093284
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-093284 logs -n 25: (1.623992998s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-093284 cp ha-093284-m03:/home/docker/cp-test.txt                             | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284-m04:/home/docker/cp-test_ha-093284-m03_ha-093284-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-093284 ssh -n                                                                | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-093284 ssh -n ha-093284-m04 sudo cat                                         | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | /home/docker/cp-test_ha-093284-m03_ha-093284-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-093284 cp testdata/cp-test.txt                                               | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-093284 ssh -n                                                                | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-093284 cp ha-093284-m04:/home/docker/cp-test.txt                             | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile164155172/001/cp-test_ha-093284-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-093284 ssh -n                                                                | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-093284 cp ha-093284-m04:/home/docker/cp-test.txt                             | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284:/home/docker/cp-test_ha-093284-m04_ha-093284.txt                      |           |         |         |                     |                     |
	| ssh     | ha-093284 ssh -n                                                                | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-093284 ssh -n ha-093284 sudo cat                                             | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | /home/docker/cp-test_ha-093284-m04_ha-093284.txt                                |           |         |         |                     |                     |
	| cp      | ha-093284 cp ha-093284-m04:/home/docker/cp-test.txt                             | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284-m02:/home/docker/cp-test_ha-093284-m04_ha-093284-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-093284 ssh -n                                                                | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-093284 ssh -n ha-093284-m02 sudo cat                                         | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | /home/docker/cp-test_ha-093284-m04_ha-093284-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-093284 cp ha-093284-m04:/home/docker/cp-test.txt                             | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284-m03:/home/docker/cp-test_ha-093284-m04_ha-093284-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-093284 ssh -n                                                                | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | ha-093284-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-093284 ssh -n ha-093284-m03 sudo cat                                         | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | /home/docker/cp-test_ha-093284-m04_ha-093284-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-093284 node stop m02 -v=7                                                    | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-093284 node start m02 -v=7                                                   | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:45 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-093284 -v=7                                                          | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-093284 -v=7                                                               | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:45 UTC | 02 Dec 24 11:46 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-093284 --wait=true -v=7                                                   | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:46 UTC | 02 Dec 24 11:48 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-093284                                                               | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:48 UTC |                     |
	| node    | ha-093284 node delete m03 -v=7                                                  | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:48 UTC | 02 Dec 24 11:48 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-093284 stop -v=7                                                             | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:48 UTC | 02 Dec 24 11:49 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-093284 --wait=true                                                        | ha-093284 | jenkins | v1.34.0 | 02 Dec 24 11:49 UTC | 02 Dec 24 11:51 UTC |
	|         | -v=7 --alsologtostderr                                                          |           |         |         |                     |                     |
	|         | --driver=docker                                                                 |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                        |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:49:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:49:13.505174   95364 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:49:13.505327   95364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:49:13.505339   95364 out.go:358] Setting ErrFile to fd 2...
	I1202 11:49:13.505346   95364 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:49:13.505534   95364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 11:49:13.506132   95364 out.go:352] Setting JSON to false
	I1202 11:49:13.507074   95364 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1905,"bootTime":1733138249,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:49:13.507180   95364 start.go:139] virtualization: kvm guest
	I1202 11:49:13.509844   95364 out.go:177] * [ha-093284] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:49:13.511579   95364 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:49:13.511584   95364 notify.go:220] Checking for updates...
	I1202 11:49:13.513040   95364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:49:13.514529   95364 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:49:13.516018   95364 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	I1202 11:49:13.518017   95364 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:49:13.519460   95364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:49:13.521576   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:49:13.522211   95364 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:49:13.543844   95364 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1202 11:49:13.543954   95364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:49:13.590152   95364 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:42 SystemTime:2024-12-02 11:49:13.581045096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:49:13.590263   95364 docker.go:318] overlay module found
	I1202 11:49:13.592120   95364 out.go:177] * Using the docker driver based on existing profile
	I1202 11:49:13.593687   95364 start.go:297] selected driver: docker
	I1202 11:49:13.593709   95364 start.go:901] validating driver "docker" against &{Name:ha-093284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-093284 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:49:13.593837   95364 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:49:13.593913   95364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:49:13.640621   95364 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:42 SystemTime:2024-12-02 11:49:13.632035687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:49:13.641533   95364 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:49:13.641573   95364 cni.go:84] Creating CNI manager for ""
	I1202 11:49:13.641660   95364 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 11:49:13.641727   95364 start.go:340] cluster config:
	{Name:ha-093284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-093284 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvi
dia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPau
seInterval:1m0s}
	I1202 11:49:13.643900   95364 out.go:177] * Starting "ha-093284" primary control-plane node in "ha-093284" cluster
	I1202 11:49:13.645394   95364 cache.go:121] Beginning downloading kic base image for docker with crio
	I1202 11:49:13.647098   95364 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1202 11:49:13.648485   95364 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:49:13.648529   95364 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:49:13.648542   95364 cache.go:56] Caching tarball of preloaded images
	I1202 11:49:13.648590   95364 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1202 11:49:13.648633   95364 preload.go:172] Found /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:49:13.648642   95364 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:49:13.648765   95364 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/config.json ...
	I1202 11:49:13.667665   95364 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1202 11:49:13.667690   95364 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1202 11:49:13.667708   95364 cache.go:194] Successfully downloaded all kic artifacts
	I1202 11:49:13.667743   95364 start.go:360] acquireMachinesLock for ha-093284: {Name:mk5bec9883f798ad4f9e5606ec0fa396a70e9f71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:49:13.667809   95364 start.go:364] duration metric: took 44.2µs to acquireMachinesLock for "ha-093284"
	I1202 11:49:13.667832   95364 start.go:96] Skipping create...Using existing machine configuration
	I1202 11:49:13.667839   95364 fix.go:54] fixHost starting: 
	I1202 11:49:13.668052   95364 cli_runner.go:164] Run: docker container inspect ha-093284 --format={{.State.Status}}
	I1202 11:49:13.684523   95364 fix.go:112] recreateIfNeeded on ha-093284: state=Stopped err=<nil>
	W1202 11:49:13.684552   95364 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 11:49:13.686803   95364 out.go:177] * Restarting existing docker container for "ha-093284" ...
	I1202 11:49:13.688236   95364 cli_runner.go:164] Run: docker start ha-093284
	I1202 11:49:13.971048   95364 cli_runner.go:164] Run: docker container inspect ha-093284 --format={{.State.Status}}
	I1202 11:49:13.989428   95364 kic.go:430] container "ha-093284" state is running.
	I1202 11:49:13.989816   95364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284
	I1202 11:49:14.008699   95364 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/config.json ...
	I1202 11:49:14.008939   95364 machine.go:93] provisionDockerMachine start ...
	I1202 11:49:14.008995   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:49:14.028437   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:49:14.028745   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I1202 11:49:14.028761   95364 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 11:49:14.029418   95364 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52030->127.0.0.1:32829: read: connection reset by peer
	I1202 11:49:17.160069   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-093284
	
	I1202 11:49:17.160118   95364 ubuntu.go:169] provisioning hostname "ha-093284"
	I1202 11:49:17.160207   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:49:17.179878   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:49:17.180112   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I1202 11:49:17.180129   95364 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-093284 && echo "ha-093284" | sudo tee /etc/hostname
	I1202 11:49:17.319586   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-093284
	
	I1202 11:49:17.319682   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:49:17.338264   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:49:17.338484   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I1202 11:49:17.338504   95364 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-093284' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-093284/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-093284' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:49:17.464471   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:49:17.464508   95364 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6540/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6540/.minikube}
	I1202 11:49:17.464557   95364 ubuntu.go:177] setting up certificates
	I1202 11:49:17.464571   95364 provision.go:84] configureAuth start
	I1202 11:49:17.464638   95364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284
	I1202 11:49:17.482861   95364 provision.go:143] copyHostCerts
	I1202 11:49:17.482910   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem
	I1202 11:49:17.482954   95364 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem, removing ...
	I1202 11:49:17.482964   95364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem
	I1202 11:49:17.483057   95364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem (1078 bytes)
	I1202 11:49:17.483165   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem
	I1202 11:49:17.483195   95364 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem, removing ...
	I1202 11:49:17.483202   95364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem
	I1202 11:49:17.483229   95364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem (1123 bytes)
	I1202 11:49:17.483328   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem
	I1202 11:49:17.483347   95364 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem, removing ...
	I1202 11:49:17.483351   95364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem
	I1202 11:49:17.483373   95364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem (1679 bytes)
	I1202 11:49:17.483437   95364 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem org=jenkins.ha-093284 san=[127.0.0.1 192.168.49.2 ha-093284 localhost minikube]
	I1202 11:49:17.589486   95364 provision.go:177] copyRemoteCerts
	I1202 11:49:17.589542   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:49:17.589574   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:49:17.607053   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284/id_rsa Username:docker}
	I1202 11:49:17.700881   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:49:17.700949   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 11:49:17.724141   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:49:17.724253   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1202 11:49:17.746472   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:49:17.746573   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 11:49:17.769009   95364 provision.go:87] duration metric: took 304.401714ms to configureAuth
	I1202 11:49:17.769042   95364 ubuntu.go:193] setting minikube options for container-runtime
	I1202 11:49:17.769290   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:49:17.769413   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:49:17.787596   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:49:17.787762   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32829 <nil> <nil>}
	I1202 11:49:17.787777   95364 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:49:18.132247   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:49:18.132358   95364 machine.go:96] duration metric: took 4.123396953s to provisionDockerMachine
	I1202 11:49:18.132376   95364 start.go:293] postStartSetup for "ha-093284" (driver="docker")
	I1202 11:49:18.132390   95364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:49:18.132521   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:49:18.132586   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:49:18.151821   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284/id_rsa Username:docker}
	I1202 11:49:18.250170   95364 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:49:18.253384   95364 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 11:49:18.253422   95364 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1202 11:49:18.253430   95364 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1202 11:49:18.253436   95364 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1202 11:49:18.253450   95364 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6540/.minikube/addons for local assets ...
	I1202 11:49:18.253512   95364 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6540/.minikube/files for local assets ...
	I1202 11:49:18.253582   95364 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem -> 132992.pem in /etc/ssl/certs
	I1202 11:49:18.253591   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem -> /etc/ssl/certs/132992.pem
	I1202 11:49:18.253667   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:49:18.261830   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem --> /etc/ssl/certs/132992.pem (1708 bytes)
	I1202 11:49:18.284308   95364 start.go:296] duration metric: took 151.914938ms for postStartSetup
	I1202 11:49:18.284402   95364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 11:49:18.284463   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:49:18.302528   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284/id_rsa Username:docker}
	I1202 11:49:18.393151   95364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 11:49:18.397391   95364 fix.go:56] duration metric: took 4.729546532s for fixHost
	I1202 11:49:18.397421   95364 start.go:83] releasing machines lock for "ha-093284", held for 4.729599542s
	I1202 11:49:18.397479   95364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284
	I1202 11:49:18.415175   95364 ssh_runner.go:195] Run: cat /version.json
	I1202 11:49:18.415221   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:49:18.415243   95364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:49:18.415297   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:49:18.432651   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284/id_rsa Username:docker}
	I1202 11:49:18.433310   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284/id_rsa Username:docker}
	I1202 11:49:18.593094   95364 ssh_runner.go:195] Run: systemctl --version
	I1202 11:49:18.598078   95364 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:49:18.735095   95364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 11:49:18.739470   95364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:49:18.747949   95364 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1202 11:49:18.748054   95364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:49:18.756187   95364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 11:49:18.756216   95364 start.go:495] detecting cgroup driver to use...
	I1202 11:49:18.756247   95364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 11:49:18.756310   95364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:49:18.767552   95364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:49:18.778128   95364 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:49:18.778191   95364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:49:18.789836   95364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:49:18.800610   95364 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:49:18.872889   95364 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:49:18.948408   95364 docker.go:233] disabling docker service ...
	I1202 11:49:18.948471   95364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:49:18.960066   95364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:49:18.970249   95364 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:49:19.045050   95364 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:49:19.117864   95364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:49:19.128738   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:49:19.144041   95364 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:49:19.144098   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:19.153475   95364 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:49:19.153541   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:19.163080   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:19.172998   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:19.182136   95364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:49:19.190679   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:19.199912   95364 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:19.208382   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:19.217400   95364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:49:19.225159   95364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:49:19.232622   95364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:49:19.305291   95364 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:49:19.388569   95364 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:49:19.388629   95364 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:49:19.392165   95364 start.go:563] Will wait 60s for crictl version
	I1202 11:49:19.392247   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:49:19.395463   95364 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:49:19.428731   95364 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1202 11:49:19.428810   95364 ssh_runner.go:195] Run: crio --version
	I1202 11:49:19.462849   95364 ssh_runner.go:195] Run: crio --version
	I1202 11:49:19.498564   95364 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1202 11:49:19.500085   95364 cli_runner.go:164] Run: docker network inspect ha-093284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 11:49:19.516989   95364 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 11:49:19.520778   95364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:49:19.531412   95364 kubeadm.go:883] updating cluster {Name:ha-093284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-093284 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevi
rt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Sock
etVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 11:49:19.531555   95364 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:49:19.531600   95364 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:49:19.572566   95364 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:49:19.572590   95364 crio.go:433] Images already preloaded, skipping extraction
	I1202 11:49:19.572633   95364 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 11:49:19.604898   95364 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 11:49:19.604920   95364 cache_images.go:84] Images are preloaded, skipping loading
	I1202 11:49:19.604928   95364 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1202 11:49:19.605029   95364 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-093284 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-093284 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:49:19.605090   95364 ssh_runner.go:195] Run: crio config
	I1202 11:49:19.645265   95364 cni.go:84] Creating CNI manager for ""
	I1202 11:49:19.645283   95364 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1202 11:49:19.645292   95364 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1202 11:49:19.645313   95364 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-093284 NodeName:ha-093284 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 11:49:19.645486   95364 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-093284"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
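Illustrative aside: the multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what minikube later copies to /var/tmp/minikube/kubeadm.yaml.new (see the scp step further down). A hedged sketch of how such a file could be sanity-checked offline, assuming kubeadm v1.31.x is on the node's PATH; neither command is run by the test:

	# Parses and validates the kubeadm documents without touching the node
	# (the 'config validate' subcommand exists in recent kubeadm releases).
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# A dry run exercises more of the init path while still leaving the host unchanged.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run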
	
	I1202 11:49:19.645509   95364 kube-vip.go:115] generating kube-vip config ...
	I1202 11:49:19.645558   95364 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 11:49:19.656952   95364 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 11:49:19.657073   95364 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
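Illustrative aside: the manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below), so the kubelet runs it as a static pod on each control-plane node. A minimal verification sketch, assuming the cluster is reachable under the ha-093284 kubeconfig context; these commands are not part of the test run:

	# kube-vip should appear as a static pod per control-plane node.
	kubectl --context ha-093284 -n kube-system get pods -o wide | grep kube-vip
	# Leader election uses the Lease named in vip_leasename above.
	kubectl --context ha-093284 -n kube-system get lease plndr-cp-lock
	# The VIP from the 'address' env var should answer on the API server port (insecure check, illustration only).
	curl -k https://192.168.49.254:8443/version

Note that, per the "giving up enabling control-plane load-balancing" message above, the IPVS modules were unavailable, so only the ARP-advertised VIP (vip_arp=true) is in play, not kube-vip's IPVS load balancing.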
	I1202 11:49:19.657128   95364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:49:19.665017   95364 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:49:19.665087   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1202 11:49:19.673067   95364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1202 11:49:19.689168   95364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:49:19.705928   95364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2283 bytes)
	I1202 11:49:19.722162   95364 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 11:49:19.738389   95364 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:49:19.741543   95364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:49:19.751697   95364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:49:19.826010   95364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:49:19.838675   95364 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284 for IP: 192.168.49.2
	I1202 11:49:19.838699   95364 certs.go:194] generating shared ca certs ...
	I1202 11:49:19.838715   95364 certs.go:226] acquiring lock for ca certs: {Name:mkb9f54a1a5b06ba02335d6260145758dc70e4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:49:19.838867   95364 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key
	I1202 11:49:19.838920   95364 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key
	I1202 11:49:19.838935   95364 certs.go:256] generating profile certs ...
	I1202 11:49:19.839065   95364 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/client.key
	I1202 11:49:19.839104   95364 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.key.c00deb57
	I1202 11:49:19.839133   95364 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.crt.c00deb57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I1202 11:49:19.956629   95364 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.crt.c00deb57 ...
	I1202 11:49:19.956660   95364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.crt.c00deb57: {Name:mkac03746bfcb9deb2c576ac46a769fda8f6b938 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:49:19.956824   95364 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.key.c00deb57 ...
	I1202 11:49:19.956836   95364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.key.c00deb57: {Name:mkd6f82aebbd40045cf2bd88e8f6dc44315abf21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:49:19.956925   95364 certs.go:381] copying /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.crt.c00deb57 -> /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.crt
	I1202 11:49:19.957097   95364 certs.go:385] copying /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.key.c00deb57 -> /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.key
	I1202 11:49:19.957227   95364 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/proxy-client.key
	I1202 11:49:19.957243   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:49:19.957255   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:49:19.957268   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:49:19.957281   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:49:19.957293   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:49:19.957310   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:49:19.957322   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:49:19.957334   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:49:19.957379   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299.pem (1338 bytes)
	W1202 11:49:19.957406   95364 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299_empty.pem, impossibly tiny 0 bytes
	I1202 11:49:19.957416   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:49:19.957438   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem (1078 bytes)
	I1202 11:49:19.957459   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:49:19.957480   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem (1679 bytes)
	I1202 11:49:19.957515   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem (1708 bytes)
	I1202 11:49:19.957536   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:49:19.957547   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299.pem -> /usr/share/ca-certificates/13299.pem
	I1202 11:49:19.957556   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem -> /usr/share/ca-certificates/132992.pem
	I1202 11:49:19.958105   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:49:19.981111   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:49:20.003642   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:49:20.026702   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 11:49:20.048908   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 11:49:20.070829   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:49:20.093129   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:49:20.115305   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:49:20.138167   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:49:20.160685   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299.pem --> /usr/share/ca-certificates/13299.pem (1338 bytes)
	I1202 11:49:20.182866   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem --> /usr/share/ca-certificates/132992.pem (1708 bytes)
	I1202 11:49:20.205622   95364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 11:49:20.222155   95364 ssh_runner.go:195] Run: openssl version
	I1202 11:49:20.227279   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:49:20.236375   95364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:49:20.239650   95364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:49:20.239708   95364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:49:20.246196   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:49:20.255712   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13299.pem && ln -fs /usr/share/ca-certificates/13299.pem /etc/ssl/certs/13299.pem"
	I1202 11:49:20.265304   95364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13299.pem
	I1202 11:49:20.268901   95364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:39 /usr/share/ca-certificates/13299.pem
	I1202 11:49:20.268957   95364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13299.pem
	I1202 11:49:20.275563   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13299.pem /etc/ssl/certs/51391683.0"
	I1202 11:49:20.284434   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132992.pem && ln -fs /usr/share/ca-certificates/132992.pem /etc/ssl/certs/132992.pem"
	I1202 11:49:20.294256   95364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132992.pem
	I1202 11:49:20.297772   95364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:39 /usr/share/ca-certificates/132992.pem
	I1202 11:49:20.297862   95364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132992.pem
	I1202 11:49:20.304517   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 11:49:20.313221   95364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:49:20.316711   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 11:49:20.323104   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 11:49:20.329699   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 11:49:20.336235   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 11:49:20.342786   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 11:49:20.349386   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
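Illustrative aside on the two openssl idioms used above: `-hash` prints the subject-name hash that OpenSSL expects as the symlink name in /etc/ssl/certs (b5213941.0 for minikubeCA.pem in this run), and `-checkend N` exits non-zero if the certificate expires within N seconds. A small sketch, not taken from the log:

	# Recreate the trust-store symlink convention used above.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# 86400 seconds = 24 hours; a zero exit means the cert is still valid a day from now.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "cert valid for at least another 24h"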
	I1202 11:49:20.355760   95364 kubeadm.go:392] StartCluster: {Name:ha-093284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-093284 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:
false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketV
MnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:49:20.355892   95364 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 11:49:20.355957   95364 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 11:49:20.390186   95364 cri.go:89] found id: ""
	I1202 11:49:20.390265   95364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 11:49:20.398897   95364 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1202 11:49:20.398919   95364 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1202 11:49:20.398960   95364 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 11:49:20.407098   95364 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 11:49:20.407522   95364 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-093284" does not appear in /home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:49:20.407644   95364 kubeconfig.go:62] /home/jenkins/minikube-integration/20033-6540/kubeconfig needs updating (will repair): [kubeconfig missing "ha-093284" cluster setting kubeconfig missing "ha-093284" context setting]
	I1202 11:49:20.407894   95364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/kubeconfig: {Name:mk5ee3d9b6afe00d14254b3bb7ff913980280999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:49:20.408353   95364 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:49:20.408583   95364 kapi.go:59] client config for ha-093284: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 11:49:20.408993   95364 cert_rotation.go:140] Starting client certificate rotation controller
	I1202 11:49:20.409167   95364 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 11:49:20.418081   95364 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I1202 11:49:20.418104   95364 kubeadm.go:597] duration metric: took 19.176915ms to restartPrimaryControlPlane
	I1202 11:49:20.418114   95364 kubeadm.go:394] duration metric: took 62.36194ms to StartCluster
	I1202 11:49:20.418131   95364 settings.go:142] acquiring lock: {Name:mkd94da5b026832ad8b1eceae7944b5245757344 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:49:20.418210   95364 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:49:20.418825   95364 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/kubeconfig: {Name:mk5ee3d9b6afe00d14254b3bb7ff913980280999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:49:20.419028   95364 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:49:20.419047   95364 start.go:241] waiting for startup goroutines ...
	I1202 11:49:20.419060   95364 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 11:49:20.419237   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:49:20.421627   95364 out.go:177] * Enabled addons: 
	I1202 11:49:20.423028   95364 addons.go:510] duration metric: took 3.97167ms for enable addons: enabled=[]
	I1202 11:49:20.423054   95364 start.go:246] waiting for cluster config update ...
	I1202 11:49:20.423062   95364 start.go:255] writing updated cluster config ...
	I1202 11:49:20.424767   95364 out.go:201] 
	I1202 11:49:20.426202   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:49:20.426296   95364 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/config.json ...
	I1202 11:49:20.427958   95364 out.go:177] * Starting "ha-093284-m02" control-plane node in "ha-093284" cluster
	I1202 11:49:20.429144   95364 cache.go:121] Beginning downloading kic base image for docker with crio
	I1202 11:49:20.430393   95364 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1202 11:49:20.431778   95364 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:49:20.431795   95364 cache.go:56] Caching tarball of preloaded images
	I1202 11:49:20.431839   95364 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1202 11:49:20.431865   95364 preload.go:172] Found /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:49:20.431873   95364 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:49:20.431952   95364 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/config.json ...
	I1202 11:49:20.451565   95364 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1202 11:49:20.451590   95364 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1202 11:49:20.451609   95364 cache.go:194] Successfully downloaded all kic artifacts
	I1202 11:49:20.451641   95364 start.go:360] acquireMachinesLock for ha-093284-m02: {Name:mka5acf52e0f5d5e37161160c285c93fcc73d7dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:49:20.451715   95364 start.go:364] duration metric: took 53.127µs to acquireMachinesLock for "ha-093284-m02"
	I1202 11:49:20.451742   95364 start.go:96] Skipping create...Using existing machine configuration
	I1202 11:49:20.451750   95364 fix.go:54] fixHost starting: m02
	I1202 11:49:20.452038   95364 cli_runner.go:164] Run: docker container inspect ha-093284-m02 --format={{.State.Status}}
	I1202 11:49:20.469544   95364 fix.go:112] recreateIfNeeded on ha-093284-m02: state=Stopped err=<nil>
	W1202 11:49:20.469579   95364 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 11:49:20.471500   95364 out.go:177] * Restarting existing docker container for "ha-093284-m02" ...
	I1202 11:49:20.472880   95364 cli_runner.go:164] Run: docker start ha-093284-m02
	I1202 11:49:20.742420   95364 cli_runner.go:164] Run: docker container inspect ha-093284-m02 --format={{.State.Status}}
	I1202 11:49:20.761897   95364 kic.go:430] container "ha-093284-m02" state is running.
	I1202 11:49:20.762241   95364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284-m02
	I1202 11:49:20.780747   95364 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/config.json ...
	I1202 11:49:20.781045   95364 machine.go:93] provisionDockerMachine start ...
	I1202 11:49:20.781129   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m02
	I1202 11:49:20.799412   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:49:20.799669   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32834 <nil> <nil>}
	I1202 11:49:20.799691   95364 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 11:49:20.800542   95364 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52440->127.0.0.1:32834: read: connection reset by peer
	I1202 11:49:23.927866   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-093284-m02
	
	I1202 11:49:23.927895   95364 ubuntu.go:169] provisioning hostname "ha-093284-m02"
	I1202 11:49:23.927959   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m02
	I1202 11:49:23.946682   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:49:23.946885   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32834 <nil> <nil>}
	I1202 11:49:23.946901   95364 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-093284-m02 && echo "ha-093284-m02" | sudo tee /etc/hostname
	I1202 11:49:24.082975   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-093284-m02
	
	I1202 11:49:24.083054   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m02
	I1202 11:49:24.100035   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:49:24.100295   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32834 <nil> <nil>}
	I1202 11:49:24.100324   95364 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-093284-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-093284-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-093284-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:49:24.228396   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:49:24.228427   95364 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6540/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6540/.minikube}
	I1202 11:49:24.228444   95364 ubuntu.go:177] setting up certificates
	I1202 11:49:24.228453   95364 provision.go:84] configureAuth start
	I1202 11:49:24.228512   95364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284-m02
	I1202 11:49:24.245125   95364 provision.go:143] copyHostCerts
	I1202 11:49:24.245164   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem
	I1202 11:49:24.245193   95364 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem, removing ...
	I1202 11:49:24.245201   95364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem
	I1202 11:49:24.245273   95364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem (1078 bytes)
	I1202 11:49:24.245351   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem
	I1202 11:49:24.245370   95364 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem, removing ...
	I1202 11:49:24.245376   95364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem
	I1202 11:49:24.245402   95364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem (1123 bytes)
	I1202 11:49:24.245445   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem
	I1202 11:49:24.245461   95364 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem, removing ...
	I1202 11:49:24.245467   95364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem
	I1202 11:49:24.245487   95364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem (1679 bytes)
	I1202 11:49:24.245535   95364 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem org=jenkins.ha-093284-m02 san=[127.0.0.1 192.168.49.3 ha-093284-m02 localhost minikube]
	I1202 11:49:24.319628   95364 provision.go:177] copyRemoteCerts
	I1202 11:49:24.319690   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:49:24.319726   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m02
	I1202 11:49:24.336522   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32834 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m02/id_rsa Username:docker}
	I1202 11:49:24.428899   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:49:24.428954   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 11:49:24.450868   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:49:24.450946   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 11:49:24.473377   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:49:24.473447   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:49:24.495176   95364 provision.go:87] duration metric: took 266.707047ms to configureAuth
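Illustrative aside: configureAuth above regenerates the docker-machine style server certificate for ha-093284-m02 with the SANs listed in the provision log and copies it to /etc/docker/server.pem. A hedged way to confirm those SANs on the node (not executed by the test):

	openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
	# Should include DNS:ha-093284-m02, DNS:localhost, DNS:minikube, IP:127.0.0.1, IP:192.168.49.3
	# (assumption based on the san=[...] list in the provision log above).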
	I1202 11:49:24.495217   95364 ubuntu.go:193] setting minikube options for container-runtime
	I1202 11:49:24.495466   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:49:24.495568   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m02
	I1202 11:49:24.512992   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:49:24.513180   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32834 <nil> <nil>}
	I1202 11:49:24.513195   95364 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:49:24.850042   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:49:24.850082   95364 machine.go:96] duration metric: took 4.0690085s to provisionDockerMachine
	I1202 11:49:24.850096   95364 start.go:293] postStartSetup for "ha-093284-m02" (driver="docker")
	I1202 11:49:24.850110   95364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:49:24.850174   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:49:24.850212   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m02
	I1202 11:49:24.867047   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32834 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m02/id_rsa Username:docker}
	I1202 11:49:24.960954   95364 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:49:24.963943   95364 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 11:49:24.963972   95364 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1202 11:49:24.963980   95364 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1202 11:49:24.963986   95364 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1202 11:49:24.963995   95364 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6540/.minikube/addons for local assets ...
	I1202 11:49:24.964057   95364 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6540/.minikube/files for local assets ...
	I1202 11:49:24.964122   95364 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem -> 132992.pem in /etc/ssl/certs
	I1202 11:49:24.964132   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem -> /etc/ssl/certs/132992.pem
	I1202 11:49:24.964210   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:49:24.972143   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem --> /etc/ssl/certs/132992.pem (1708 bytes)
	I1202 11:49:24.993820   95364 start.go:296] duration metric: took 143.707979ms for postStartSetup
	I1202 11:49:24.993897   95364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 11:49:24.993929   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m02
	I1202 11:49:25.011303   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32834 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m02/id_rsa Username:docker}
	I1202 11:49:25.101252   95364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 11:49:25.105691   95364 fix.go:56] duration metric: took 4.653936039s for fixHost
	I1202 11:49:25.105724   95364 start.go:83] releasing machines lock for "ha-093284-m02", held for 4.653995378s
	I1202 11:49:25.105842   95364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284-m02
	I1202 11:49:25.124809   95364 out.go:177] * Found network options:
	I1202 11:49:25.126384   95364 out.go:177]   - NO_PROXY=192.168.49.2
	W1202 11:49:25.127740   95364 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:49:25.127778   95364 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:49:25.127843   95364 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:49:25.127877   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m02
	I1202 11:49:25.127913   95364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:49:25.127972   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m02
	I1202 11:49:25.145661   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32834 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m02/id_rsa Username:docker}
	I1202 11:49:25.145985   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32834 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m02/id_rsa Username:docker}
	I1202 11:49:25.367683   95364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 11:49:25.372380   95364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:49:25.380630   95364 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1202 11:49:25.380696   95364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:49:25.389151   95364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 11:49:25.389172   95364 start.go:495] detecting cgroup driver to use...
	I1202 11:49:25.389201   95364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 11:49:25.389233   95364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:49:25.399928   95364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:49:25.410041   95364 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:49:25.410090   95364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:49:25.422014   95364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:49:25.436771   95364 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:49:25.731906   95364 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:49:26.128806   95364 docker.go:233] disabling docker service ...
	I1202 11:49:26.128867   95364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:49:26.203548   95364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:49:26.215822   95364 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:49:26.531167   95364 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:49:26.826964   95364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:49:26.847117   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:49:26.942058   95364 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:49:26.942122   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:26.955269   95364 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:49:26.955331   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:27.011944   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:27.024311   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:27.036797   95364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:49:27.048532   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:27.114257   95364 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:27.126737   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:49:27.209371   95364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:49:27.220190   95364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:49:27.228905   95364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:49:27.529524   95364 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 11:49:27.826268   95364 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:49:27.826360   95364 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:49:27.830658   95364 start.go:563] Will wait 60s for crictl version
	I1202 11:49:27.830716   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:49:27.833888   95364 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:49:27.867328   95364 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1202 11:49:27.867397   95364 ssh_runner.go:195] Run: crio --version
	I1202 11:49:27.908964   95364 ssh_runner.go:195] Run: crio --version
	I1202 11:49:27.953372   95364 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1202 11:49:27.954647   95364 out.go:177]   - env NO_PROXY=192.168.49.2
	I1202 11:49:27.955909   95364 cli_runner.go:164] Run: docker network inspect ha-093284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 11:49:27.974000   95364 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 11:49:27.977825   95364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:49:27.988545   95364 mustload.go:65] Loading cluster: ha-093284
	I1202 11:49:27.988833   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:49:27.989154   95364 cli_runner.go:164] Run: docker container inspect ha-093284 --format={{.State.Status}}
	I1202 11:49:28.009002   95364 host.go:66] Checking if "ha-093284" exists ...
	I1202 11:49:28.009268   95364 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284 for IP: 192.168.49.3
	I1202 11:49:28.009280   95364 certs.go:194] generating shared ca certs ...
	I1202 11:49:28.009301   95364 certs.go:226] acquiring lock for ca certs: {Name:mkb9f54a1a5b06ba02335d6260145758dc70e4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:49:28.009424   95364 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key
	I1202 11:49:28.009480   95364 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key
	I1202 11:49:28.009494   95364 certs.go:256] generating profile certs ...
	I1202 11:49:28.009592   95364 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/client.key
	I1202 11:49:28.009660   95364 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.key.792445fb
	I1202 11:49:28.009713   95364 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/proxy-client.key
	I1202 11:49:28.009727   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:49:28.009748   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:49:28.009769   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:49:28.009787   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:49:28.009807   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1202 11:49:28.009826   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1202 11:49:28.009849   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1202 11:49:28.009877   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1202 11:49:28.009957   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299.pem (1338 bytes)
	W1202 11:49:28.010006   95364 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299_empty.pem, impossibly tiny 0 bytes
	I1202 11:49:28.010020   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:49:28.010057   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem (1078 bytes)
	I1202 11:49:28.010092   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:49:28.010124   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem (1679 bytes)
	I1202 11:49:28.010182   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem (1708 bytes)
	I1202 11:49:28.010224   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem -> /usr/share/ca-certificates/132992.pem
	I1202 11:49:28.010249   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:49:28.010375   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299.pem -> /usr/share/ca-certificates/13299.pem
	I1202 11:49:28.010450   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:49:28.029761   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32829 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284/id_rsa Username:docker}
	I1202 11:49:28.120617   95364 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1202 11:49:28.124465   95364 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1202 11:49:28.136369   95364 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1202 11:49:28.139691   95364 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1202 11:49:28.151917   95364 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1202 11:49:28.155790   95364 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1202 11:49:28.168753   95364 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1202 11:49:28.172124   95364 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1202 11:49:28.185410   95364 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1202 11:49:28.188742   95364 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1202 11:49:28.202012   95364 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1202 11:49:28.205351   95364 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1202 11:49:28.219740   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:49:28.244322   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:49:28.268342   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:49:28.290865   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 11:49:28.318894   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1202 11:49:28.341852   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 11:49:28.364575   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 11:49:28.393118   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 11:49:28.415123   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem --> /usr/share/ca-certificates/132992.pem (1708 bytes)
	I1202 11:49:28.436427   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:49:28.458846   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299.pem --> /usr/share/ca-certificates/13299.pem (1338 bytes)
	I1202 11:49:28.481483   95364 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1202 11:49:28.502924   95364 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1202 11:49:28.524207   95364 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1202 11:49:28.544254   95364 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1202 11:49:28.567724   95364 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1202 11:49:28.604555   95364 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1202 11:49:28.632364   95364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1202 11:49:28.650442   95364 ssh_runner.go:195] Run: openssl version
	I1202 11:49:28.655510   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132992.pem && ln -fs /usr/share/ca-certificates/132992.pem /etc/ssl/certs/132992.pem"
	I1202 11:49:28.664204   95364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132992.pem
	I1202 11:49:28.667826   95364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:39 /usr/share/ca-certificates/132992.pem
	I1202 11:49:28.667875   95364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132992.pem
	I1202 11:49:28.674070   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132992.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 11:49:28.681877   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:49:28.690182   95364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:49:28.693272   95364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:49:28.693328   95364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:49:28.699539   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:49:28.710458   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13299.pem && ln -fs /usr/share/ca-certificates/13299.pem /etc/ssl/certs/13299.pem"
	I1202 11:49:28.720636   95364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13299.pem
	I1202 11:49:28.724026   95364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:39 /usr/share/ca-certificates/13299.pem
	I1202 11:49:28.724095   95364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13299.pem
	I1202 11:49:28.730458   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13299.pem /etc/ssl/certs/51391683.0"
	I1202 11:49:28.738672   95364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:49:28.742013   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 11:49:28.748139   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 11:49:28.755321   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 11:49:28.763087   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 11:49:28.769460   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 11:49:28.775615   95364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
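
The run of `openssl x509 ... -checkend 86400` calls above asks, for each control-plane certificate, whether it expires within the next 24 hours. A rough Go equivalent using crypto/x509 is sketched below; it is illustrative only — the log shows minikube shelling out to openssl rather than parsing the certificates itself.

```go
// certcheck.go — sketch of `openssl x509 -checkend 86400`: report whether a
// PEM certificate's NotAfter falls within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// before now+window.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, fmt.Errorf("%s: no certificate PEM block", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	for _, p := range os.Args[1:] {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
```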
	I1202 11:49:28.781743   95364 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.2 crio true true} ...
	I1202 11:49:28.781851   95364 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-093284-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-093284 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 11:49:28.781878   95364 kube-vip.go:115] generating kube-vip config ...
	I1202 11:49:28.781918   95364 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1202 11:49:28.792945   95364 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1202 11:49:28.793011   95364 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.6
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
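
Because the `lsmod | grep ip_vs` probe a few lines above exited non-zero, the generated kube-vip manifest keeps only the ARP-advertised VIP and skips IPVS-based control-plane load-balancing. A tiny sketch of an equivalent availability check (reading /proc/modules, which is what lsmod itself reads) is shown below as an assumption about the intent, not minikube's actual code path.

```go
// ipvscheck.go — sketch of probing for the ip_vs kernel module before
// enabling IPVS-based load-balancing.
package main

import (
	"fmt"
	"os"
	"strings"
)

// ipvsAvailable reports whether an ip_vs* module appears in /proc/modules.
func ipvsAvailable() bool {
	data, err := os.ReadFile("/proc/modules") // lsmod reads the same file
	if err != nil {
		return false
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true
		}
	}
	return false
}

func main() {
	if ipvsAvailable() {
		fmt.Println("ip_vs modules present: IPVS load-balancing can be enabled")
	} else {
		fmt.Println("ip_vs not loaded: fall back to the ARP-based VIP only")
	}
}
```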
	I1202 11:49:28.793054   95364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:49:28.801131   95364 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:49:28.801228   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1202 11:49:28.811885   95364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 11:49:28.831708   95364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:49:28.847850   95364 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1202 11:49:28.864246   95364 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:49:28.867570   95364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:49:28.878249   95364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:49:28.977308   95364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:49:28.988459   95364 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 11:49:28.988738   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:49:28.990685   95364 out.go:177] * Verifying Kubernetes components...
	I1202 11:49:28.991799   95364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:49:29.089310   95364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:49:29.100946   95364 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:49:29.101192   95364 kapi.go:59] client config for ha-093284: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 11:49:29.101262   95364 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 11:49:29.101470   95364 node_ready.go:35] waiting up to 6m0s for node "ha-093284-m02" to be "Ready" ...
	I1202 11:49:29.101557   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:29.101565   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:29.101572   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:29.101577   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:41.819585   95364 round_trippers.go:574] Response Status: 500 Internal Server Error in 12717 milliseconds
	I1202 11:49:41.820140   95364 node_ready.go:53] error getting node "ha-093284-m02": etcdserver: request timed out
	I1202 11:49:41.820305   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:41.820345   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:41.820386   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:41.820413   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.059199   95364 round_trippers.go:574] Response Status: 200 OK in 4238 milliseconds
	I1202 11:49:46.062386   95364 node_ready.go:49] node "ha-093284-m02" has status "Ready":"True"
	I1202 11:49:46.062419   95364 node_ready.go:38] duration metric: took 16.960931686s for node "ha-093284-m02" to be "Ready" ...
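
The round-tripper traces above are the raw form of a "wait for node Ready" loop: GET the Node object, inspect its Ready condition, and retry until the 6m0s budget runs out. A minimal client-go sketch of that loop follows; the kubeconfig source, node name, and poll interval are illustrative assumptions, not minikube's node_ready.go.

```go
// nodeready.go — sketch of polling a node until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node has condition Ready=True.
func isNodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// KUBECONFIG must point at the cluster's kubeconfig for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-093284-m02", metav1.GetOptions{})
		if err == nil && isNodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // simple fixed-interval poll
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```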
	I1202 11:49:46.062432   95364 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:49:46.062503   95364 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 11:49:46.062517   95364 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 11:49:46.062605   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1202 11:49:46.062613   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.062623   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.062628   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.110843   95364 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I1202 11:49:46.122681   95364 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-k72v5" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.122793   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-k72v5
	I1202 11:49:46.122804   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.122812   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.122816   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.125416   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:46.126048   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:46.126064   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.126072   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.126077   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.128433   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:46.128962   95364 pod_ready.go:93] pod "coredns-7c65d6cfc9-k72v5" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:46.128988   95364 pod_ready.go:82] duration metric: took 6.27476ms for pod "coredns-7c65d6cfc9-k72v5" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.129002   95364 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.129083   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:49:46.129091   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.129098   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.129102   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.131331   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:46.131923   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:46.131939   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.131946   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.131950   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.133991   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:46.134495   95364 pod_ready.go:93] pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:46.134517   95364 pod_ready.go:82] duration metric: took 5.502749ms for pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.134533   95364 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-093284" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.134609   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-093284
	I1202 11:49:46.134619   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.134630   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.134640   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.136822   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:46.137404   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:46.137418   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.137424   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.137428   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.139457   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:46.139955   95364 pod_ready.go:93] pod "etcd-ha-093284" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:46.139973   95364 pod_ready.go:82] duration metric: took 5.430003ms for pod "etcd-ha-093284" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.139983   95364 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.140050   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-093284-m02
	I1202 11:49:46.140057   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.140064   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.140068   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.142095   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:46.142796   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:46.142810   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.142817   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.142821   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.144708   95364 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:49:46.145238   95364 pod_ready.go:93] pod "etcd-ha-093284-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:46.145272   95364 pod_ready.go:82] duration metric: took 5.282321ms for pod "etcd-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.145286   95364 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-093284-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.145358   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-093284-m03
	I1202 11:49:46.145368   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.145377   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.145385   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.147323   95364 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:49:46.263086   95364 request.go:632] Waited for 115.264765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m03
	I1202 11:49:46.263143   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m03
	I1202 11:49:46.263147   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.263157   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.263166   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.265818   95364 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1202 11:49:46.265928   95364 pod_ready.go:98] node "ha-093284-m03" hosting pod "etcd-ha-093284-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-093284-m03": nodes "ha-093284-m03" not found
	I1202 11:49:46.265943   95364 pod_ready.go:82] duration metric: took 120.646065ms for pod "etcd-ha-093284-m03" in "kube-system" namespace to be "Ready" ...
	E1202 11:49:46.265951   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284-m03" hosting pod "etcd-ha-093284-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-093284-m03": nodes "ha-093284-m03" not found
	I1202 11:49:46.265971   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-093284" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.463392   95364 request.go:632] Waited for 197.348963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-093284
	I1202 11:49:46.463469   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-093284
	I1202 11:49:46.463477   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.463488   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.463494   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.466367   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:46.663422   95364 request.go:632] Waited for 196.38155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:46.663492   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:46.663499   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.663508   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.663516   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.666536   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:46.667123   95364 pod_ready.go:93] pod "kube-apiserver-ha-093284" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:46.667147   95364 pod_ready.go:82] duration metric: took 401.16565ms for pod "kube-apiserver-ha-093284" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.667158   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:46.863190   95364 request.go:632] Waited for 195.966731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-093284-m02
	I1202 11:49:46.863261   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-093284-m02
	I1202 11:49:46.863269   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:46.863277   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:46.863283   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:46.866162   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:47.063392   95364 request.go:632] Waited for 196.363402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:47.063443   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:47.063461   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:47.063487   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:47.063496   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:47.066253   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:47.066720   95364 pod_ready.go:93] pod "kube-apiserver-ha-093284-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:47.066741   95364 pod_ready.go:82] duration metric: took 399.574539ms for pod "kube-apiserver-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:47.066751   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-093284-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:47.262689   95364 request.go:632] Waited for 195.865006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-093284-m03
	I1202 11:49:47.262766   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-093284-m03
	I1202 11:49:47.262784   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:47.262796   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:47.262802   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:47.265072   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:47.463278   95364 request.go:632] Waited for 197.384243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m03
	I1202 11:49:47.463383   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m03
	I1202 11:49:47.463395   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:47.463404   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:47.463415   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:47.468434   95364 round_trippers.go:574] Response Status: 404 Not Found in 4 milliseconds
	I1202 11:49:47.468575   95364 pod_ready.go:98] node "ha-093284-m03" hosting pod "kube-apiserver-ha-093284-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-093284-m03": nodes "ha-093284-m03" not found
	I1202 11:49:47.468595   95364 pod_ready.go:82] duration metric: took 401.838244ms for pod "kube-apiserver-ha-093284-m03" in "kube-system" namespace to be "Ready" ...
	E1202 11:49:47.468658   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284-m03" hosting pod "kube-apiserver-ha-093284-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-093284-m03": nodes "ha-093284-m03" not found
	I1202 11:49:47.468691   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-093284" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:47.662980   95364 request.go:632] Waited for 194.188993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-093284
	I1202 11:49:47.663075   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-093284
	I1202 11:49:47.663083   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:47.663093   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:47.663099   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:47.665669   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:47.862728   95364 request.go:632] Waited for 196.246499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:47.862832   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:47.862846   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:47.862862   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:47.862871   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:47.866728   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:49:47.867270   95364 pod_ready.go:98] node "ha-093284" hosting pod "kube-controller-manager-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"False"
	I1202 11:49:47.867306   95364 pod_ready.go:82] duration metric: took 398.592112ms for pod "kube-controller-manager-ha-093284" in "kube-system" namespace to be "Ready" ...
	E1202 11:49:47.867319   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284" hosting pod "kube-controller-manager-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"False"
	I1202 11:49:47.867333   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:48.063300   95364 request.go:632] Waited for 195.869221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-093284-m02
	I1202 11:49:48.063370   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-093284-m02
	I1202 11:49:48.063382   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:48.063395   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:48.063405   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:48.066061   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:48.263274   95364 request.go:632] Waited for 196.373125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:48.263344   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:48.263352   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:48.263359   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:48.263365   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:48.266298   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:48.266956   95364 pod_ready.go:93] pod "kube-controller-manager-ha-093284-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:48.266986   95364 pod_ready.go:82] duration metric: took 399.641326ms for pod "kube-controller-manager-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:48.267003   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-093284-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:48.462929   95364 request.go:632] Waited for 195.826302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-093284-m03
	I1202 11:49:48.463007   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-093284-m03
	I1202 11:49:48.463018   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:48.463029   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:48.463041   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:48.468668   95364 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:49:48.663369   95364 request.go:632] Waited for 193.864748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m03
	I1202 11:49:48.663473   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m03
	I1202 11:49:48.663487   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:48.663497   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:48.663505   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:48.703101   95364 round_trippers.go:574] Response Status: 404 Not Found in 39 milliseconds
	I1202 11:49:48.703579   95364 pod_ready.go:98] node "ha-093284-m03" hosting pod "kube-controller-manager-ha-093284-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-093284-m03": nodes "ha-093284-m03" not found
	I1202 11:49:48.703658   95364 pod_ready.go:82] duration metric: took 436.643647ms for pod "kube-controller-manager-ha-093284-m03" in "kube-system" namespace to be "Ready" ...
	E1202 11:49:48.703685   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284-m03" hosting pod "kube-controller-manager-ha-093284-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-093284-m03": nodes "ha-093284-m03" not found
	I1202 11:49:48.703723   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ddc8v" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:48.863255   95364 request.go:632] Waited for 159.392861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ddc8v
	I1202 11:49:48.863309   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ddc8v
	I1202 11:49:48.863314   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:48.863321   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:48.863325   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:48.865969   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:49.062870   95364 request.go:632] Waited for 196.169407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:49.062958   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:49.062971   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:49.062981   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:49.062993   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:49.065461   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:49.066030   95364 pod_ready.go:98] node "ha-093284" hosting pod "kube-proxy-ddc8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"False"
	I1202 11:49:49.066055   95364 pod_ready.go:82] duration metric: took 362.297023ms for pod "kube-proxy-ddc8v" in "kube-system" namespace to be "Ready" ...
	E1202 11:49:49.066075   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284" hosting pod "kube-proxy-ddc8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"False"
	I1202 11:49:49.066085   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5zm7" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:49.263003   95364 request.go:632] Waited for 196.827421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5zm7
	I1202 11:49:49.263061   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5zm7
	I1202 11:49:49.263066   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:49.263073   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:49.263077   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:49.265876   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:49.462806   95364 request.go:632] Waited for 196.300902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:49.462875   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:49.462882   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:49.462892   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:49.462900   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:49.465446   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:49.465969   95364 pod_ready.go:93] pod "kube-proxy-g5zm7" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:49.465988   95364 pod_ready.go:82] duration metric: took 399.89341ms for pod "kube-proxy-g5zm7" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:49.466002   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nbwvv" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:49.663140   95364 request.go:632] Waited for 197.062978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbwvv
	I1202 11:49:49.663230   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbwvv
	I1202 11:49:49.663240   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:49.663248   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:49.663253   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:49.665860   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:49.862959   95364 request.go:632] Waited for 196.365682ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:49:49.863044   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:49:49.863055   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:49.863067   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:49.863078   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:49.865750   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:49.866430   95364 pod_ready.go:93] pod "kube-proxy-nbwvv" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:49.866455   95364 pod_ready.go:82] duration metric: took 400.443258ms for pod "kube-proxy-nbwvv" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:49.866470   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tdjgw" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:50.063322   95364 request.go:632] Waited for 196.777161ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdjgw
	I1202 11:49:50.063388   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tdjgw
	I1202 11:49:50.063393   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:50.063405   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:50.063412   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:50.066112   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:50.263074   95364 request.go:632] Waited for 196.359104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m03
	I1202 11:49:50.263152   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m03
	I1202 11:49:50.263163   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:50.263179   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:50.263188   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:50.265645   95364 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1202 11:49:50.265749   95364 pod_ready.go:98] node "ha-093284-m03" hosting pod "kube-proxy-tdjgw" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-093284-m03": nodes "ha-093284-m03" not found
	I1202 11:49:50.265765   95364 pod_ready.go:82] duration metric: took 399.286452ms for pod "kube-proxy-tdjgw" in "kube-system" namespace to be "Ready" ...
	E1202 11:49:50.265774   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284-m03" hosting pod "kube-proxy-tdjgw" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-093284-m03": nodes "ha-093284-m03" not found
	I1202 11:49:50.265783   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-093284" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:50.463149   95364 request.go:632] Waited for 197.283753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-093284
	I1202 11:49:50.463228   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-093284
	I1202 11:49:50.463239   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:50.463251   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:50.463262   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:50.466041   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:50.662978   95364 request.go:632] Waited for 196.389878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:50.663034   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:49:50.663039   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:50.663046   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:50.663052   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:50.665736   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:50.666235   95364 pod_ready.go:98] node "ha-093284" hosting pod "kube-scheduler-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"False"
	I1202 11:49:50.666257   95364 pod_ready.go:82] duration metric: took 400.462901ms for pod "kube-scheduler-ha-093284" in "kube-system" namespace to be "Ready" ...
	E1202 11:49:50.666266   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284" hosting pod "kube-scheduler-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"False"
	I1202 11:49:50.666274   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:50.863184   95364 request.go:632] Waited for 196.850126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-093284-m02
	I1202 11:49:50.863243   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-093284-m02
	I1202 11:49:50.863248   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:50.863255   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:50.863261   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:50.865982   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:51.062979   95364 request.go:632] Waited for 196.348057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:51.063052   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:49:51.063056   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:51.063065   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:51.063075   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:51.065711   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:51.066177   95364 pod_ready.go:93] pod "kube-scheduler-ha-093284-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:49:51.066195   95364 pod_ready.go:82] duration metric: took 399.914083ms for pod "kube-scheduler-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:51.066207   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-093284-m03" in "kube-system" namespace to be "Ready" ...
	I1202 11:49:51.263312   95364 request.go:632] Waited for 197.02577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-093284-m03
	I1202 11:49:51.263415   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-093284-m03
	I1202 11:49:51.263425   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:51.263433   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:51.263440   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:51.266152   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:49:51.463031   95364 request.go:632] Waited for 196.313706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m03
	I1202 11:49:51.463107   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m03
	I1202 11:49:51.463113   95364 round_trippers.go:469] Request Headers:
	I1202 11:49:51.463121   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:49:51.463129   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:49:51.465479   95364 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1202 11:49:51.465600   95364 pod_ready.go:98] node "ha-093284-m03" hosting pod "kube-scheduler-ha-093284-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-093284-m03": nodes "ha-093284-m03" not found
	I1202 11:49:51.465622   95364 pod_ready.go:82] duration metric: took 399.404562ms for pod "kube-scheduler-ha-093284-m03" in "kube-system" namespace to be "Ready" ...
	E1202 11:49:51.465635   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284-m03" hosting pod "kube-scheduler-ha-093284-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-093284-m03": nodes "ha-093284-m03" not found
	I1202 11:49:51.465650   95364 pod_ready.go:39] duration metric: took 5.403205105s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:49:51.465669   95364 api_server.go:52] waiting for apiserver process to appear ...
	I1202 11:49:51.465726   95364 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:49:51.476948   95364 api_server.go:72] duration metric: took 22.488438328s to wait for apiserver process to appear ...
	I1202 11:49:51.476974   95364 api_server.go:88] waiting for apiserver healthz status ...
	I1202 11:49:51.476992   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:51.480757   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:51.480781   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:51.977488   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:51.982892   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:51.982925   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:52.477531   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:52.482875   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:52.482914   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:52.977190   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:52.980865   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:52.980888   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:53.477418   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:53.481203   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:53.481229   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:53.977905   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:53.981692   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:53.981720   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:54.477243   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:54.481686   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:54.481716   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:54.977205   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:54.980838   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:54.980871   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:55.477459   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:55.481375   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:55.481400   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:55.977963   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:55.981668   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:55.981698   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:56.477201   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:56.480989   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:56.481017   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:56.977532   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:56.981083   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:56.981109   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:57.477427   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:57.481030   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:57.481059   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:57.977428   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:57.981145   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:57.981176   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:58.477411   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:58.481466   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:58.481496   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:49:58.977122   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:49:58.980820   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:49:58.980852   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	[... identical healthz output repeated: api_server.go kept polling https://192.168.49.2:8443/healthz roughly every 500ms from 11:49:59.477 through 11:50:06.981, and each response was HTTP 500 with the same check list, only [-]poststarthook/start-service-ip-repair-controllers failing ...]
	I1202 11:50:07.477444   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:07.481108   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:07.481152   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:07.977459   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:07.981321   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:07.981348   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:08.477950   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:08.481825   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:08.481859   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:08.977536   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:08.981918   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:08.981944   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:09.477521   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:09.482684   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:09.482711   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:09.977876   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:09.982138   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:09.982220   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:10.477903   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:10.483303   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:10.483332   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:10.977479   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:10.981272   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:10.981298   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:11.477948   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:11.483850   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:11.483886   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:11.977463   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:12.000378   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:12.000412   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:12.477976   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:12.482301   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:12.482326   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:12.978081   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:12.982449   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:12.982474   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:13.478085   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:13.481823   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:13.481857   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:13.977452   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:13.982510   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:13.982542   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:14.477042   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:14.480849   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:14.480875   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:14.977212   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:14.981417   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:14.981446   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:15.478054   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:15.481925   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:15.481949   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:15.977398   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:15.981772   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:15.981808   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:16.477272   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:16.481110   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:16.481135   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:16.977445   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:16.981474   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:16.981504   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:17.477034   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:17.480870   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:17.480897   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:17.977445   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:17.981988   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:17.982012   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:18.477096   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:18.480822   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:18.480845   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:18.977466   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:18.981878   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:18.981912   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:19.477450   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:19.481207   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:19.481243   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:19.978091   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:19.982540   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:19.982572   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:20.477431   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:20.481487   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:20.481513   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:20.977047   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:20.980932   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:20.980958   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:21.477037   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:21.481290   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:21.481315   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:21.977470   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:21.982999   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:21.983029   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:22.477573   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:22.481575   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:22.481609   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:22.977113   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:22.980629   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:22.980655   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:23.477204   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:23.481150   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:23.481171   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:23.977532   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:23.981319   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:23.981352   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:24.477960   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:24.481825   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:24.481850   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:24.977366   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:24.981226   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:24.981250   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:25.477424   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:25.481111   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:25.481138   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:25.977454   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:25.981348   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:25.981381   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:26.477072   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:26.481097   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:26.481125   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:26.977480   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:26.981170   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:26.981200   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:27.477493   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:27.481255   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:27.481289   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:27.977436   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:27.981258   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:27.981283   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:28.477854   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:28.481437   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 11:50:28.481463   95364 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 11:50:28.977301   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:28.977731   95364 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
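The block above is minikube polling the apiserver's /healthz endpoint roughly every 500 ms while the start-service-ip-repair-controllers post-start hook is still failing, until the connection is finally refused as the apiserver restarts. A minimal Go sketch of that style of polling loop follows; it is an illustration only, not minikube's implementation, and the endpoint, interval, and overall timeout are assumed values taken from the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz repeatedly GETs the apiserver /healthz endpoint until it
	// returns 200 OK or the deadline passes. TLS verification is skipped only
	// because this is a throwaway diagnostic sketch.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
				// A 500 carries the per-check breakdown seen in the log above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// 192.168.49.2:8443 is the control-plane endpoint from the log; adjust as needed.
		if err := pollHealthz("https://192.168.49.2:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}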
	I1202 11:50:29.477324   95364 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 11:50:29.477400   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 11:50:29.513082   95364 cri.go:89] found id: "8ea915c52db78e2ab649ae0f4b946c199c686ab6d7759eb3fc1453b29bd74146"
	I1202 11:50:29.513105   95364 cri.go:89] found id: "30654d80a8d33630d6635fb612b893e6131bd80ce1c6b9ba164a38169a255af0"
	I1202 11:50:29.513111   95364 cri.go:89] found id: ""
	I1202 11:50:29.513124   95364 logs.go:282] 2 containers: [8ea915c52db78e2ab649ae0f4b946c199c686ab6d7759eb3fc1453b29bd74146 30654d80a8d33630d6635fb612b893e6131bd80ce1c6b9ba164a38169a255af0]
	I1202 11:50:29.513176   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:29.516606   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:29.520131   95364 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 11:50:29.520212   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 11:50:29.556050   95364 cri.go:89] found id: "9355d9d8ac7047c48ed7be52be3e5d8db5c5e0f3bef79c4b90351a0bab7e724d"
	I1202 11:50:29.556074   95364 cri.go:89] found id: "d633b6b4e0357c4829104e73e4ecb34e1249aa1572cc5410bb9cd74943a07868"
	I1202 11:50:29.556081   95364 cri.go:89] found id: ""
	I1202 11:50:29.556089   95364 logs.go:282] 2 containers: [9355d9d8ac7047c48ed7be52be3e5d8db5c5e0f3bef79c4b90351a0bab7e724d d633b6b4e0357c4829104e73e4ecb34e1249aa1572cc5410bb9cd74943a07868]
	I1202 11:50:29.556146   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:29.559537   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:29.562855   95364 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 11:50:29.562904   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 11:50:29.596784   95364 cri.go:89] found id: ""
	I1202 11:50:29.596806   95364 logs.go:282] 0 containers: []
	W1202 11:50:29.596814   95364 logs.go:284] No container was found matching "coredns"
	I1202 11:50:29.596821   95364 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 11:50:29.596879   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 11:50:29.630315   95364 cri.go:89] found id: "c30bd3f2b3b1e5584d2befae9eaf0d47f239db9bc2eeb36c3847b010475cf578"
	I1202 11:50:29.630343   95364 cri.go:89] found id: "d9288b6f265cd4e997770b48424b3e0a844da3b4f14dbec630d6a19afe915f9f"
	I1202 11:50:29.630347   95364 cri.go:89] found id: ""
	I1202 11:50:29.630356   95364 logs.go:282] 2 containers: [c30bd3f2b3b1e5584d2befae9eaf0d47f239db9bc2eeb36c3847b010475cf578 d9288b6f265cd4e997770b48424b3e0a844da3b4f14dbec630d6a19afe915f9f]
	I1202 11:50:29.630410   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:29.634118   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:29.637614   95364 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 11:50:29.637675   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 11:50:29.678980   95364 cri.go:89] found id: "189541cd6573ff3f517898cbf6a6ee6e2f705af2d1e9ffa455fcfa6c81c450cf"
	I1202 11:50:29.678999   95364 cri.go:89] found id: ""
	I1202 11:50:29.679007   95364 logs.go:282] 1 containers: [189541cd6573ff3f517898cbf6a6ee6e2f705af2d1e9ffa455fcfa6c81c450cf]
	I1202 11:50:29.679066   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:29.682685   95364 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 11:50:29.682742   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 11:50:29.716328   95364 cri.go:89] found id: "daaef531191780f1c0e70ffbcd2d3fb3632c097846aab5967383affdf3abf3a7"
	I1202 11:50:29.716353   95364 cri.go:89] found id: "3d5c931cb2e8eae4bd30fdd2fc3d7e4814475dc13c55dffebf54fae31871371a"
	I1202 11:50:29.716359   95364 cri.go:89] found id: ""
	I1202 11:50:29.716366   95364 logs.go:282] 2 containers: [daaef531191780f1c0e70ffbcd2d3fb3632c097846aab5967383affdf3abf3a7 3d5c931cb2e8eae4bd30fdd2fc3d7e4814475dc13c55dffebf54fae31871371a]
	I1202 11:50:29.716422   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:29.719831   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:29.722980   95364 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 11:50:29.723037   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 11:50:29.761874   95364 cri.go:89] found id: "073aa670f1aba27ee1c916e7c5b83866a1060f979dddd3a19a35ac96a6dace6d"
	I1202 11:50:29.761899   95364 cri.go:89] found id: ""
	I1202 11:50:29.761909   95364 logs.go:282] 1 containers: [073aa670f1aba27ee1c916e7c5b83866a1060f979dddd3a19a35ac96a6dace6d]
	I1202 11:50:29.761963   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:29.765875   95364 logs.go:123] Gathering logs for dmesg ...
	I1202 11:50:29.765903   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 11:50:29.782879   95364 logs.go:123] Gathering logs for etcd [d633b6b4e0357c4829104e73e4ecb34e1249aa1572cc5410bb9cd74943a07868] ...
	I1202 11:50:29.782910   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d633b6b4e0357c4829104e73e4ecb34e1249aa1572cc5410bb9cd74943a07868"
	I1202 11:50:29.837685   95364 logs.go:123] Gathering logs for kube-controller-manager [3d5c931cb2e8eae4bd30fdd2fc3d7e4814475dc13c55dffebf54fae31871371a] ...
	I1202 11:50:29.837717   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d5c931cb2e8eae4bd30fdd2fc3d7e4814475dc13c55dffebf54fae31871371a"
	I1202 11:50:29.876138   95364 logs.go:123] Gathering logs for kindnet [073aa670f1aba27ee1c916e7c5b83866a1060f979dddd3a19a35ac96a6dace6d] ...
	I1202 11:50:29.876163   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 073aa670f1aba27ee1c916e7c5b83866a1060f979dddd3a19a35ac96a6dace6d"
	I1202 11:50:29.917571   95364 logs.go:123] Gathering logs for CRI-O ...
	I1202 11:50:29.917599   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 11:50:29.992532   95364 logs.go:123] Gathering logs for kubelet ...
	I1202 11:50:29.992568   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 11:50:30.060326   95364 logs.go:123] Gathering logs for describe nodes ...
	I1202 11:50:30.060368   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 11:50:30.332633   95364 logs.go:123] Gathering logs for kube-scheduler [d9288b6f265cd4e997770b48424b3e0a844da3b4f14dbec630d6a19afe915f9f] ...
	I1202 11:50:30.332667   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9288b6f265cd4e997770b48424b3e0a844da3b4f14dbec630d6a19afe915f9f"
	I1202 11:50:30.365542   95364 logs.go:123] Gathering logs for kube-proxy [189541cd6573ff3f517898cbf6a6ee6e2f705af2d1e9ffa455fcfa6c81c450cf] ...
	I1202 11:50:30.365574   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 189541cd6573ff3f517898cbf6a6ee6e2f705af2d1e9ffa455fcfa6c81c450cf"
	I1202 11:50:30.398900   95364 logs.go:123] Gathering logs for kube-controller-manager [daaef531191780f1c0e70ffbcd2d3fb3632c097846aab5967383affdf3abf3a7] ...
	I1202 11:50:30.398932   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daaef531191780f1c0e70ffbcd2d3fb3632c097846aab5967383affdf3abf3a7"
	I1202 11:50:30.465387   95364 logs.go:123] Gathering logs for kube-apiserver [30654d80a8d33630d6635fb612b893e6131bd80ce1c6b9ba164a38169a255af0] ...
	I1202 11:50:30.465432   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30654d80a8d33630d6635fb612b893e6131bd80ce1c6b9ba164a38169a255af0"
	I1202 11:50:30.500630   95364 logs.go:123] Gathering logs for etcd [9355d9d8ac7047c48ed7be52be3e5d8db5c5e0f3bef79c4b90351a0bab7e724d] ...
	I1202 11:50:30.500665   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9355d9d8ac7047c48ed7be52be3e5d8db5c5e0f3bef79c4b90351a0bab7e724d"
	I1202 11:50:30.547218   95364 logs.go:123] Gathering logs for container status ...
	I1202 11:50:30.547251   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 11:50:30.590086   95364 logs.go:123] Gathering logs for kube-apiserver [8ea915c52db78e2ab649ae0f4b946c199c686ab6d7759eb3fc1453b29bd74146] ...
	I1202 11:50:30.590130   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ea915c52db78e2ab649ae0f4b946c199c686ab6d7759eb3fc1453b29bd74146"
	I1202 11:50:30.647065   95364 logs.go:123] Gathering logs for kube-scheduler [c30bd3f2b3b1e5584d2befae9eaf0d47f239db9bc2eeb36c3847b010475cf578] ...
	I1202 11:50:30.647098   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30bd3f2b3b1e5584d2befae9eaf0d47f239db9bc2eeb36c3847b010475cf578"
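The pass above is the runner's post-mortem log gathering: for each control-plane component it resolves container IDs with crictl and then tails each container's log. The Go sketch below reproduces the same command sequence locally, shelling out to crictl directly instead of over SSH as minikube does; the component list is an assumption based on the commands visible in the log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatherLogs mirrors the sequence in the log above: list container IDs for a
	// component with `crictl ps`, then tail each container's log with `crictl logs`.
	func gatherLogs(component string) error {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return fmt.Errorf("listing %s containers: %w", component, err)
		}
		for _, id := range strings.Fields(string(out)) {
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				return fmt.Errorf("reading logs for %s: %w", id, err)
			}
			fmt.Printf("==> %s [%s] <==\n%s\n", component, id, logs)
		}
		return nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			if err := gatherLogs(c); err != nil {
				fmt.Println(err)
			}
		}
	}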
	I1202 11:50:33.181753   95364 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1202 11:50:33.187280   95364 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1202 11:50:33.187376   95364 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I1202 11:50:33.187385   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:33.187392   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:33.187396   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:33.193044   95364 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1202 11:50:33.193192   95364 api_server.go:141] control plane version: v1.31.2
	I1202 11:50:33.193228   95364 api_server.go:131] duration metric: took 41.716235323s to wait for apiserver health ...
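Once /healthz finally returns 200, the runner reads the control-plane version with a GET on /version, which is what produces the v1.31.2 line above. A hedged sketch of that single request follows; certificate verification and authentication are deliberately skipped here, whereas the real run authenticates via the kubeconfig.

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
	)

	// versionInfo holds the fields of interest from the apiserver's /version payload.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
	}

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // diagnostic sketch only
		}}
		resp, err := client.Get("https://192.168.49.2:8443/version")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		var v versionInfo
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			fmt.Println(err)
			return
		}
		// Matches the "control plane version: v1.31.2" line in the log above.
		fmt.Println("control plane version:", v.GitVersion)
	}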
	I1202 11:50:33.193242   95364 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 11:50:33.193272   95364 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 11:50:33.193338   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 11:50:33.227886   95364 cri.go:89] found id: "8ea915c52db78e2ab649ae0f4b946c199c686ab6d7759eb3fc1453b29bd74146"
	I1202 11:50:33.227908   95364 cri.go:89] found id: "30654d80a8d33630d6635fb612b893e6131bd80ce1c6b9ba164a38169a255af0"
	I1202 11:50:33.227913   95364 cri.go:89] found id: ""
	I1202 11:50:33.227921   95364 logs.go:282] 2 containers: [8ea915c52db78e2ab649ae0f4b946c199c686ab6d7759eb3fc1453b29bd74146 30654d80a8d33630d6635fb612b893e6131bd80ce1c6b9ba164a38169a255af0]
	I1202 11:50:33.227970   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:33.231421   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:33.234405   95364 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 11:50:33.234458   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 11:50:33.268291   95364 cri.go:89] found id: "9355d9d8ac7047c48ed7be52be3e5d8db5c5e0f3bef79c4b90351a0bab7e724d"
	I1202 11:50:33.268317   95364 cri.go:89] found id: "d633b6b4e0357c4829104e73e4ecb34e1249aa1572cc5410bb9cd74943a07868"
	I1202 11:50:33.268322   95364 cri.go:89] found id: ""
	I1202 11:50:33.268331   95364 logs.go:282] 2 containers: [9355d9d8ac7047c48ed7be52be3e5d8db5c5e0f3bef79c4b90351a0bab7e724d d633b6b4e0357c4829104e73e4ecb34e1249aa1572cc5410bb9cd74943a07868]
	I1202 11:50:33.268396   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:33.271783   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:33.274913   95364 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 11:50:33.274970   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 11:50:33.306863   95364 cri.go:89] found id: ""
	I1202 11:50:33.306890   95364 logs.go:282] 0 containers: []
	W1202 11:50:33.306900   95364 logs.go:284] No container was found matching "coredns"
	I1202 11:50:33.306907   95364 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 11:50:33.306965   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 11:50:33.339417   95364 cri.go:89] found id: "c30bd3f2b3b1e5584d2befae9eaf0d47f239db9bc2eeb36c3847b010475cf578"
	I1202 11:50:33.339438   95364 cri.go:89] found id: "d9288b6f265cd4e997770b48424b3e0a844da3b4f14dbec630d6a19afe915f9f"
	I1202 11:50:33.339443   95364 cri.go:89] found id: ""
	I1202 11:50:33.339450   95364 logs.go:282] 2 containers: [c30bd3f2b3b1e5584d2befae9eaf0d47f239db9bc2eeb36c3847b010475cf578 d9288b6f265cd4e997770b48424b3e0a844da3b4f14dbec630d6a19afe915f9f]
	I1202 11:50:33.339497   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:33.342844   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:33.346183   95364 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 11:50:33.346248   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 11:50:33.379125   95364 cri.go:89] found id: "189541cd6573ff3f517898cbf6a6ee6e2f705af2d1e9ffa455fcfa6c81c450cf"
	I1202 11:50:33.379149   95364 cri.go:89] found id: ""
	I1202 11:50:33.379158   95364 logs.go:282] 1 containers: [189541cd6573ff3f517898cbf6a6ee6e2f705af2d1e9ffa455fcfa6c81c450cf]
	I1202 11:50:33.379204   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:33.382549   95364 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 11:50:33.382617   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 11:50:33.415872   95364 cri.go:89] found id: "daaef531191780f1c0e70ffbcd2d3fb3632c097846aab5967383affdf3abf3a7"
	I1202 11:50:33.415917   95364 cri.go:89] found id: "3d5c931cb2e8eae4bd30fdd2fc3d7e4814475dc13c55dffebf54fae31871371a"
	I1202 11:50:33.415923   95364 cri.go:89] found id: ""
	I1202 11:50:33.415929   95364 logs.go:282] 2 containers: [daaef531191780f1c0e70ffbcd2d3fb3632c097846aab5967383affdf3abf3a7 3d5c931cb2e8eae4bd30fdd2fc3d7e4814475dc13c55dffebf54fae31871371a]
	I1202 11:50:33.415971   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:33.419288   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:33.422361   95364 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 11:50:33.422424   95364 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 11:50:33.454075   95364 cri.go:89] found id: "073aa670f1aba27ee1c916e7c5b83866a1060f979dddd3a19a35ac96a6dace6d"
	I1202 11:50:33.454097   95364 cri.go:89] found id: ""
	I1202 11:50:33.454105   95364 logs.go:282] 1 containers: [073aa670f1aba27ee1c916e7c5b83866a1060f979dddd3a19a35ac96a6dace6d]
	I1202 11:50:33.454146   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:33.457516   95364 logs.go:123] Gathering logs for kubelet ...
	I1202 11:50:33.457540   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 11:50:33.514726   95364 logs.go:123] Gathering logs for dmesg ...
	I1202 11:50:33.514764   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 11:50:33.529132   95364 logs.go:123] Gathering logs for CRI-O ...
	I1202 11:50:33.529160   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 11:50:33.585321   95364 logs.go:123] Gathering logs for kube-apiserver [30654d80a8d33630d6635fb612b893e6131bd80ce1c6b9ba164a38169a255af0] ...
	I1202 11:50:33.585353   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30654d80a8d33630d6635fb612b893e6131bd80ce1c6b9ba164a38169a255af0"
	I1202 11:50:33.618887   95364 logs.go:123] Gathering logs for kube-scheduler [c30bd3f2b3b1e5584d2befae9eaf0d47f239db9bc2eeb36c3847b010475cf578] ...
	I1202 11:50:33.618915   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c30bd3f2b3b1e5584d2befae9eaf0d47f239db9bc2eeb36c3847b010475cf578"
	I1202 11:50:33.652678   95364 logs.go:123] Gathering logs for kube-proxy [189541cd6573ff3f517898cbf6a6ee6e2f705af2d1e9ffa455fcfa6c81c450cf] ...
	I1202 11:50:33.652706   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 189541cd6573ff3f517898cbf6a6ee6e2f705af2d1e9ffa455fcfa6c81c450cf"
	I1202 11:50:33.686902   95364 logs.go:123] Gathering logs for kube-controller-manager [3d5c931cb2e8eae4bd30fdd2fc3d7e4814475dc13c55dffebf54fae31871371a] ...
	I1202 11:50:33.686933   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d5c931cb2e8eae4bd30fdd2fc3d7e4814475dc13c55dffebf54fae31871371a"
	I1202 11:50:33.719524   95364 logs.go:123] Gathering logs for describe nodes ...
	I1202 11:50:33.719562   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1202 11:50:33.939577   95364 logs.go:123] Gathering logs for etcd [9355d9d8ac7047c48ed7be52be3e5d8db5c5e0f3bef79c4b90351a0bab7e724d] ...
	I1202 11:50:33.939611   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9355d9d8ac7047c48ed7be52be3e5d8db5c5e0f3bef79c4b90351a0bab7e724d"
	I1202 11:50:33.980325   95364 logs.go:123] Gathering logs for etcd [d633b6b4e0357c4829104e73e4ecb34e1249aa1572cc5410bb9cd74943a07868] ...
	I1202 11:50:33.980362   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d633b6b4e0357c4829104e73e4ecb34e1249aa1572cc5410bb9cd74943a07868"
	I1202 11:50:34.024196   95364 logs.go:123] Gathering logs for kube-controller-manager [daaef531191780f1c0e70ffbcd2d3fb3632c097846aab5967383affdf3abf3a7] ...
	I1202 11:50:34.024228   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daaef531191780f1c0e70ffbcd2d3fb3632c097846aab5967383affdf3abf3a7"
	I1202 11:50:34.074392   95364 logs.go:123] Gathering logs for container status ...
	I1202 11:50:34.074422   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 11:50:34.111723   95364 logs.go:123] Gathering logs for kube-apiserver [8ea915c52db78e2ab649ae0f4b946c199c686ab6d7759eb3fc1453b29bd74146] ...
	I1202 11:50:34.111748   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ea915c52db78e2ab649ae0f4b946c199c686ab6d7759eb3fc1453b29bd74146"
	I1202 11:50:34.151186   95364 logs.go:123] Gathering logs for kube-scheduler [d9288b6f265cd4e997770b48424b3e0a844da3b4f14dbec630d6a19afe915f9f] ...
	I1202 11:50:34.151213   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9288b6f265cd4e997770b48424b3e0a844da3b4f14dbec630d6a19afe915f9f"
	I1202 11:50:34.183774   95364 logs.go:123] Gathering logs for kindnet [073aa670f1aba27ee1c916e7c5b83866a1060f979dddd3a19a35ac96a6dace6d] ...
	I1202 11:50:34.183803   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 073aa670f1aba27ee1c916e7c5b83866a1060f979dddd3a19a35ac96a6dace6d"
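Each "Gathering logs for ..." step above fetches the last 400 lines from one container with `sudo crictl logs --tail 400 <id>`, executed through minikube's ssh_runner. The sketch below is a simplified local version of that collection loop, run with plain os/exec instead of SSH and with placeholder container IDs; it is illustrative, not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs mimics the "Gathering logs for ..." step: for each container ID,
// run `sudo crictl logs --tail 400 <id>` and collect the combined output.
func gatherLogs(ids []string) map[string]string {
	out := make(map[string]string)
	for _, id := range ids {
		// minikube runs this over SSH via ssh_runner; here it is executed locally.
		b, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			out[id] = fmt.Sprintf("error: %v", err)
			continue
		}
		out[id] = string(b)
	}
	return out
}

func main() {
	// Placeholder IDs; in the log above these are full CRI-O container IDs.
	logs := gatherLogs([]string{"c30bd3f2b3b1", "9355d9d8ac70"})
	for id, l := range logs {
		fmt.Printf("== %s ==\n%s\n", id, l)
	}
}
```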
	I1202 11:50:36.719481   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1202 11:50:36.719506   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:36.719514   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:36.719518   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:36.725979   95364 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1202 11:50:36.731386   95364 system_pods.go:59] 19 kube-system pods found
	I1202 11:50:36.731429   95364 system_pods.go:61] "coredns-7c65d6cfc9-k72v5" [71dc7af8-7f49-421f-852b-6df436e833aa] Running
	I1202 11:50:36.731440   95364 system_pods.go:61] "coredns-7c65d6cfc9-s9tph" [1f92c59a-36b8-41aa-bb21-736dc11c748d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 11:50:36.731448   95364 system_pods.go:61] "etcd-ha-093284" [69ec5840-d438-4d5c-a15e-67e32293688e] Running
	I1202 11:50:36.731455   95364 system_pods.go:61] "etcd-ha-093284-m02" [91996de6-dc9e-4c5f-869a-93001f618fc7] Running
	I1202 11:50:36.731458   95364 system_pods.go:61] "kindnet-6z757" [deddbecb-6345-4a63-9f1c-91de296322df] Running
	I1202 11:50:36.731462   95364 system_pods.go:61] "kindnet-7mpq6" [1eecb197-743a-4a85-9095-c1d2c876c27e] Running
	I1202 11:50:36.731465   95364 system_pods.go:61] "kindnet-qtflb" [7174fd89-4a10-4b45-94f5-36228b0240b8] Running
	I1202 11:50:36.731470   95364 system_pods.go:61] "kube-apiserver-ha-093284" [0acdeade-2250-4711-b035-75b1f0f50ae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 11:50:36.731479   95364 system_pods.go:61] "kube-apiserver-ha-093284-m02" [7d6f9dc9-3765-4905-bf34-1f2e7b97af93] Running
	I1202 11:50:36.731488   95364 system_pods.go:61] "kube-controller-manager-ha-093284" [860326b6-07dd-4c5c-92da-f231c3250344] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 11:50:36.731495   95364 system_pods.go:61] "kube-controller-manager-ha-093284-m02" [13e92a46-4043-4265-be4a-124b68855f07] Running
	I1202 11:50:36.731501   95364 system_pods.go:61] "kube-proxy-ddc8v" [dab494ac-ce67-44d6-ad43-f0448f755162] Running
	I1202 11:50:36.731506   95364 system_pods.go:61] "kube-proxy-g5zm7" [2b2372e6-f424-40e5-a335-2763afc4dcea] Running
	I1202 11:50:36.731510   95364 system_pods.go:61] "kube-proxy-nbwvv" [fa8a1e97-9b35-42cd-8a4e-6c81b879963c] Running
	I1202 11:50:36.731513   95364 system_pods.go:61] "kube-scheduler-ha-093284" [8e069110-ae83-4115-9c26-5cb833e6c879] Running
	I1202 11:50:36.731517   95364 system_pods.go:61] "kube-scheduler-ha-093284-m02" [7e14b79d-cac0-4226-8537-80d288b8d47e] Running
	I1202 11:50:36.731522   95364 system_pods.go:61] "kube-vip-ha-093284" [d95565ad-b082-461e-a69f-ad8aff868999] Running
	I1202 11:50:36.731525   95364 system_pods.go:61] "kube-vip-ha-093284-m02" [15892890-5b5b-40c0-af4e-b764d5ea9071] Running
	I1202 11:50:36.731528   95364 system_pods.go:61] "storage-provisioner" [7a9b0f4d-0bc1-4e4b-9b24-f9b8547d8a92] Running
	I1202 11:50:36.731533   95364 system_pods.go:74] duration metric: took 3.538282403s to wait for pod list to return data ...
	I1202 11:50:36.731543   95364 default_sa.go:34] waiting for default service account to be created ...
	I1202 11:50:36.731622   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1202 11:50:36.731629   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:36.731635   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:36.731640   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:36.734602   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:36.734818   95364 default_sa.go:45] found service account: "default"
	I1202 11:50:36.734832   95364 default_sa.go:55] duration metric: took 3.283835ms for default service account to be created ...
	I1202 11:50:36.734841   95364 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 11:50:36.734897   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1202 11:50:36.734904   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:36.734911   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:36.734915   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:36.738800   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:50:36.744213   95364 system_pods.go:86] 19 kube-system pods found
	I1202 11:50:36.744240   95364 system_pods.go:89] "coredns-7c65d6cfc9-k72v5" [71dc7af8-7f49-421f-852b-6df436e833aa] Running
	I1202 11:50:36.744248   95364 system_pods.go:89] "coredns-7c65d6cfc9-s9tph" [1f92c59a-36b8-41aa-bb21-736dc11c748d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 11:50:36.744254   95364 system_pods.go:89] "etcd-ha-093284" [69ec5840-d438-4d5c-a15e-67e32293688e] Running
	I1202 11:50:36.744261   95364 system_pods.go:89] "etcd-ha-093284-m02" [91996de6-dc9e-4c5f-869a-93001f618fc7] Running
	I1202 11:50:36.744280   95364 system_pods.go:89] "kindnet-6z757" [deddbecb-6345-4a63-9f1c-91de296322df] Running
	I1202 11:50:36.744287   95364 system_pods.go:89] "kindnet-7mpq6" [1eecb197-743a-4a85-9095-c1d2c876c27e] Running
	I1202 11:50:36.744297   95364 system_pods.go:89] "kindnet-qtflb" [7174fd89-4a10-4b45-94f5-36228b0240b8] Running
	I1202 11:50:36.744309   95364 system_pods.go:89] "kube-apiserver-ha-093284" [0acdeade-2250-4711-b035-75b1f0f50ae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 11:50:36.744321   95364 system_pods.go:89] "kube-apiserver-ha-093284-m02" [7d6f9dc9-3765-4905-bf34-1f2e7b97af93] Running
	I1202 11:50:36.744330   95364 system_pods.go:89] "kube-controller-manager-ha-093284" [860326b6-07dd-4c5c-92da-f231c3250344] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 11:50:36.744339   95364 system_pods.go:89] "kube-controller-manager-ha-093284-m02" [13e92a46-4043-4265-be4a-124b68855f07] Running
	I1202 11:50:36.744349   95364 system_pods.go:89] "kube-proxy-ddc8v" [dab494ac-ce67-44d6-ad43-f0448f755162] Running
	I1202 11:50:36.744353   95364 system_pods.go:89] "kube-proxy-g5zm7" [2b2372e6-f424-40e5-a335-2763afc4dcea] Running
	I1202 11:50:36.744358   95364 system_pods.go:89] "kube-proxy-nbwvv" [fa8a1e97-9b35-42cd-8a4e-6c81b879963c] Running
	I1202 11:50:36.744367   95364 system_pods.go:89] "kube-scheduler-ha-093284" [8e069110-ae83-4115-9c26-5cb833e6c879] Running
	I1202 11:50:36.744377   95364 system_pods.go:89] "kube-scheduler-ha-093284-m02" [7e14b79d-cac0-4226-8537-80d288b8d47e] Running
	I1202 11:50:36.744383   95364 system_pods.go:89] "kube-vip-ha-093284" [d95565ad-b082-461e-a69f-ad8aff868999] Running
	I1202 11:50:36.744392   95364 system_pods.go:89] "kube-vip-ha-093284-m02" [15892890-5b5b-40c0-af4e-b764d5ea9071] Running
	I1202 11:50:36.744400   95364 system_pods.go:89] "storage-provisioner" [7a9b0f4d-0bc1-4e4b-9b24-f9b8547d8a92] Running
	I1202 11:50:36.744408   95364 system_pods.go:126] duration metric: took 9.561699ms to wait for k8s-apps to be running ...
	I1202 11:50:36.744431   95364 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:50:36.744490   95364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:50:36.757547   95364 system_svc.go:56] duration metric: took 13.116281ms WaitForService to wait for kubelet
	I1202 11:50:36.757584   95364 kubeadm.go:582] duration metric: took 1m7.769077618s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:50:36.757607   95364 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:50:36.757699   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1202 11:50:36.757710   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:36.757721   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:36.757730   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:36.761620   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:50:36.762964   95364 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 11:50:36.762996   95364 node_conditions.go:123] node cpu capacity is 8
	I1202 11:50:36.763012   95364 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 11:50:36.763018   95364 node_conditions.go:123] node cpu capacity is 8
	I1202 11:50:36.763024   95364 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 11:50:36.763029   95364 node_conditions.go:123] node cpu capacity is 8
	I1202 11:50:36.763035   95364 node_conditions.go:105] duration metric: took 5.421943ms to run NodePressure ...
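The NodePressure verification above reads each node's ephemeral-storage and CPU capacity from the API server. A minimal client-go sketch of the same read is shown below; the kubeconfig path is an assumption for illustration (minikube itself goes through its own round_trippers client).

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust to the cluster being inspected.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Same fields the log reports: ephemeral storage and CPU capacity per node.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}
```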
	I1202 11:50:36.763054   95364 start.go:241] waiting for startup goroutines ...
	I1202 11:50:36.763080   95364 start.go:255] writing updated cluster config ...
	I1202 11:50:36.765552   95364 out.go:201] 
	I1202 11:50:36.767275   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:50:36.767372   95364 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/config.json ...
	I1202 11:50:36.769240   95364 out.go:177] * Starting "ha-093284-m04" worker node in "ha-093284" cluster
	I1202 11:50:36.770707   95364 cache.go:121] Beginning downloading kic base image for docker with crio
	I1202 11:50:36.772310   95364 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1202 11:50:36.773687   95364 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:50:36.773716   95364 cache.go:56] Caching tarball of preloaded images
	I1202 11:50:36.773729   95364 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1202 11:50:36.773839   95364 preload.go:172] Found /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 11:50:36.773856   95364 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:50:36.773961   95364 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/config.json ...
	I1202 11:50:36.793710   95364 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
	I1202 11:50:36.793733   95364 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
	I1202 11:50:36.793755   95364 cache.go:194] Successfully downloaded all kic artifacts
	I1202 11:50:36.793789   95364 start.go:360] acquireMachinesLock for ha-093284-m04: {Name:mkd21fbfaf70891e089d65a97ceb8dc57c92c199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 11:50:36.793854   95364 start.go:364] duration metric: took 45.885µs to acquireMachinesLock for "ha-093284-m04"
	I1202 11:50:36.793878   95364 start.go:96] Skipping create...Using existing machine configuration
	I1202 11:50:36.793889   95364 fix.go:54] fixHost starting: m04
	I1202 11:50:36.794104   95364 cli_runner.go:164] Run: docker container inspect ha-093284-m04 --format={{.State.Status}}
	I1202 11:50:36.811782   95364 fix.go:112] recreateIfNeeded on ha-093284-m04: state=Stopped err=<nil>
	W1202 11:50:36.811814   95364 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 11:50:36.814034   95364 out.go:177] * Restarting existing docker container for "ha-093284-m04" ...
	I1202 11:50:36.815548   95364 cli_runner.go:164] Run: docker start ha-093284-m04
	I1202 11:50:37.090300   95364 cli_runner.go:164] Run: docker container inspect ha-093284-m04 --format={{.State.Status}}
	I1202 11:50:37.107531   95364 kic.go:430] container "ha-093284-m04" state is running.
	I1202 11:50:37.107908   95364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284-m04
	I1202 11:50:37.126286   95364 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/config.json ...
	I1202 11:50:37.126561   95364 machine.go:93] provisionDockerMachine start ...
	I1202 11:50:37.126622   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m04
	I1202 11:50:37.145336   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:50:37.145577   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32839 <nil> <nil>}
	I1202 11:50:37.145598   95364 main.go:141] libmachine: About to run SSH command:
	hostname
	I1202 11:50:37.146232   95364 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56708->127.0.0.1:32839: read: connection reset by peer
	I1202 11:50:40.275769   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-093284-m04
	
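The "Error dialing TCP ... connection reset by peer" line is expected right after `docker start`: sshd inside the container is not up yet, so libmachine keeps retrying until the hostname command succeeds a few seconds later. A rough sketch of such a retry loop with golang.org/x/crypto/ssh is below; the port and user come from the log, the key path is a placeholder, and the loop is illustrative rather than libmachine's code.

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps retrying the SSH connection while the freshly restarted
// container is still booting (the "connection reset by peer" line above).
func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
		Timeout:         5 * time.Second,
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh not reachable after %d attempts: %w", attempts, lastErr)
}

func main() {
	// Port 32839 and user "docker" match the log; the key path is a placeholder
	// (the report uses a Jenkins workspace path under .minikube/machines).
	client, err := dialWithRetry("127.0.0.1:32839", "docker",
		os.ExpandEnv("$HOME/.minikube/machines/ha-093284-m04/id_rsa"), 30)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected")
}
```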
	I1202 11:50:40.275803   95364 ubuntu.go:169] provisioning hostname "ha-093284-m04"
	I1202 11:50:40.275862   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m04
	I1202 11:50:40.295032   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:50:40.295240   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32839 <nil> <nil>}
	I1202 11:50:40.295267   95364 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-093284-m04 && echo "ha-093284-m04" | sudo tee /etc/hostname
	I1202 11:50:40.435447   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-093284-m04
	
	I1202 11:50:40.435529   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m04
	I1202 11:50:40.452839   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:50:40.453013   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32839 <nil> <nil>}
	I1202 11:50:40.453030   95364 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-093284-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-093284-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-093284-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 11:50:40.584392   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1202 11:50:40.584419   95364 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6540/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6540/.minikube}
	I1202 11:50:40.584439   95364 ubuntu.go:177] setting up certificates
	I1202 11:50:40.584452   95364 provision.go:84] configureAuth start
	I1202 11:50:40.584511   95364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284-m04
	I1202 11:50:40.601701   95364 provision.go:143] copyHostCerts
	I1202 11:50:40.601739   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem
	I1202 11:50:40.601769   95364 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem, removing ...
	I1202 11:50:40.601784   95364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem
	I1202 11:50:40.601853   95364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/ca.pem (1078 bytes)
	I1202 11:50:40.601926   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem
	I1202 11:50:40.601944   95364 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem, removing ...
	I1202 11:50:40.601951   95364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem
	I1202 11:50:40.601974   95364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/cert.pem (1123 bytes)
	I1202 11:50:40.602050   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem
	I1202 11:50:40.602069   95364 exec_runner.go:144] found /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem, removing ...
	I1202 11:50:40.602073   95364 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem
	I1202 11:50:40.602094   95364 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6540/.minikube/key.pem (1679 bytes)
	I1202 11:50:40.602148   95364 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem org=jenkins.ha-093284-m04 san=[127.0.0.1 192.168.49.5 ha-093284-m04 localhost minikube]
	I1202 11:50:40.682309   95364 provision.go:177] copyRemoteCerts
	I1202 11:50:40.682365   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 11:50:40.682399   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m04
	I1202 11:50:40.700146   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32839 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m04/id_rsa Username:docker}
	I1202 11:50:40.793070   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1202 11:50:40.793125   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1202 11:50:40.815463   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1202 11:50:40.815533   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 11:50:40.838566   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1202 11:50:40.838623   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
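copyRemoteCerts pushes ca.pem, server.pem and server-key.pem into /etc/docker on the target machine over SSH. Below is a simplified local stand-in for that copy step; the file modes are typical choices for public certificates versus a private key and are an assumption, not something read from provision.go.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// installCert copies one certificate file into the destination directory with
// the given mode, standing in for the scp transfers shown in the log.
func installCert(src, dstDir string, mode os.FileMode) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	dst := filepath.Join(dstDir, filepath.Base(src))
	return os.WriteFile(dst, data, mode)
}

func main() {
	// Source paths are placeholders for the files under .minikube shown above.
	for _, c := range []struct {
		path string
		mode os.FileMode
	}{
		{"ca.pem", 0o644},
		{"server.pem", 0o644},
		{"server-key.pem", 0o600}, // keep the private key owner-only (assumed mode)
	} {
		if err := installCert(c.path, "/etc/docker", c.mode); err != nil {
			fmt.Println("copy failed:", err)
		}
	}
}
```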
	I1202 11:50:40.860787   95364 provision.go:87] duration metric: took 276.323102ms to configureAuth
	I1202 11:50:40.860819   95364 ubuntu.go:193] setting minikube options for container-runtime
	I1202 11:50:40.861041   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:50:40.861135   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m04
	I1202 11:50:40.879934   95364 main.go:141] libmachine: Using SSH client type: native
	I1202 11:50:40.880110   95364 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 127.0.0.1 32839 <nil> <nil>}
	I1202 11:50:40.880126   95364 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 11:50:41.135401   95364 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 11:50:41.135428   95364 machine.go:96] duration metric: took 4.008852978s to provisionDockerMachine
	I1202 11:50:41.135444   95364 start.go:293] postStartSetup for "ha-093284-m04" (driver="docker")
	I1202 11:50:41.135457   95364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 11:50:41.135523   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 11:50:41.135572   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m04
	I1202 11:50:41.153959   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32839 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m04/id_rsa Username:docker}
	I1202 11:50:41.249619   95364 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 11:50:41.252710   95364 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1202 11:50:41.252750   95364 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1202 11:50:41.252763   95364 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1202 11:50:41.252771   95364 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1202 11:50:41.252784   95364 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6540/.minikube/addons for local assets ...
	I1202 11:50:41.252849   95364 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6540/.minikube/files for local assets ...
	I1202 11:50:41.252935   95364 filesync.go:149] local asset: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem -> 132992.pem in /etc/ssl/certs
	I1202 11:50:41.252946   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem -> /etc/ssl/certs/132992.pem
	I1202 11:50:41.253063   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 11:50:41.261384   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem --> /etc/ssl/certs/132992.pem (1708 bytes)
	I1202 11:50:41.283539   95364 start.go:296] duration metric: took 148.079058ms for postStartSetup
	I1202 11:50:41.283633   95364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 11:50:41.283672   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m04
	I1202 11:50:41.301050   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32839 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m04/id_rsa Username:docker}
	I1202 11:50:41.389417   95364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1202 11:50:41.393647   95364 fix.go:56] duration metric: took 4.59975383s for fixHost
	I1202 11:50:41.393673   95364 start.go:83] releasing machines lock for "ha-093284-m04", held for 4.599807128s
	I1202 11:50:41.393740   95364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284-m04
	I1202 11:50:41.413923   95364 out.go:177] * Found network options:
	I1202 11:50:41.415300   95364 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1202 11:50:41.416516   95364 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:50:41.416540   95364 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:50:41.416564   95364 proxy.go:119] fail to check proxy env: Error ip not in block
	W1202 11:50:41.416582   95364 proxy.go:119] fail to check proxy env: Error ip not in block
	I1202 11:50:41.416657   95364 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 11:50:41.416711   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m04
	I1202 11:50:41.416727   95364 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 11:50:41.416797   95364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m04
	I1202 11:50:41.434763   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32839 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m04/id_rsa Username:docker}
	I1202 11:50:41.435480   95364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32839 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m04/id_rsa Username:docker}
	I1202 11:50:41.666973   95364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1202 11:50:41.671454   95364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:50:41.679879   95364 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1202 11:50:41.679956   95364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 11:50:41.687993   95364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
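The two `find /etc/cni/net.d ...` runs above disable competing CNI configs by renaming them with a .mk_disabled suffix, loopback first and then any bridge or podman configs, leaving kindnet as the only active CNI. A rough Go equivalent of the bridge/podman pass, as a sketch rather than minikube's cni.go logic:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs renames bridge/podman CNI config files so CRI-O ignores them,
// mirroring the `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled` step above.
func disableCNIConfs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableCNIConfs("/etc/cni/net.d"); err != nil {
		fmt.Println(err)
	}
}
```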
	I1202 11:50:41.688013   95364 start.go:495] detecting cgroup driver to use...
	I1202 11:50:41.688044   95364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1202 11:50:41.688078   95364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 11:50:41.698850   95364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 11:50:41.709384   95364 docker.go:217] disabling cri-docker service (if available) ...
	I1202 11:50:41.709445   95364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 11:50:41.722626   95364 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 11:50:41.733500   95364 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 11:50:41.815498   95364 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 11:50:41.891635   95364 docker.go:233] disabling docker service ...
	I1202 11:50:41.891693   95364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 11:50:41.902826   95364 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 11:50:41.912994   95364 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 11:50:41.995804   95364 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 11:50:42.075449   95364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 11:50:42.086474   95364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 11:50:42.101570   95364 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1202 11:50:42.101630   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:50:42.111886   95364 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 11:50:42.111966   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:50:42.122984   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:50:42.132430   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:50:42.141566   95364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 11:50:42.149934   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:50:42.159267   95364 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:50:42.168077   95364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 11:50:42.177047   95364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 11:50:42.184843   95364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 11:50:42.192477   95364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:50:42.258284   95364 ssh_runner.go:195] Run: sudo systemctl restart crio
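The sed commands above (11:50:42.101 through 11:50:42.168) rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, set conmon_cgroup = "pod", and allow unprivileged low ports, after which systemd is reloaded and CRI-O restarted. A minimal sketch of the first two edits done in Go instead of sed (same file path as the log; the regexes are simplified):

```go
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Point CRI-O at the pause image and cgroup driver the log configures.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
	// minikube then runs `systemctl daemon-reload` and `systemctl restart crio`.
}
```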
	I1202 11:50:42.386598   95364 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 11:50:42.386666   95364 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 11:50:42.391043   95364 start.go:563] Will wait 60s for crictl version
	I1202 11:50:42.391105   95364 ssh_runner.go:195] Run: which crictl
	I1202 11:50:42.394134   95364 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 11:50:42.426692   95364 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1202 11:50:42.426768   95364 ssh_runner.go:195] Run: crio --version
	I1202 11:50:42.459965   95364 ssh_runner.go:195] Run: crio --version
	I1202 11:50:42.495280   95364 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1202 11:50:42.496572   95364 out.go:177]   - env NO_PROXY=192.168.49.2
	I1202 11:50:42.497808   95364 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1202 11:50:42.499082   95364 cli_runner.go:164] Run: docker network inspect ha-093284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1202 11:50:42.515938   95364 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1202 11:50:42.519521   95364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 11:50:42.529866   95364 mustload.go:65] Loading cluster: ha-093284
	I1202 11:50:42.530155   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:50:42.530472   95364 cli_runner.go:164] Run: docker container inspect ha-093284 --format={{.State.Status}}
	I1202 11:50:42.547967   95364 host.go:66] Checking if "ha-093284" exists ...
	I1202 11:50:42.548220   95364 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284 for IP: 192.168.49.5
	I1202 11:50:42.548232   95364 certs.go:194] generating shared ca certs ...
	I1202 11:50:42.548244   95364 certs.go:226] acquiring lock for ca certs: {Name:mkb9f54a1a5b06ba02335d6260145758dc70e4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:50:42.548375   95364 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key
	I1202 11:50:42.548414   95364 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key
	I1202 11:50:42.548427   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1202 11:50:42.548442   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1202 11:50:42.548454   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1202 11:50:42.548467   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1202 11:50:42.548512   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299.pem (1338 bytes)
	W1202 11:50:42.548540   95364 certs.go:480] ignoring /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299_empty.pem, impossibly tiny 0 bytes
	I1202 11:50:42.548547   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 11:50:42.548569   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/ca.pem (1078 bytes)
	I1202 11:50:42.548592   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/cert.pem (1123 bytes)
	I1202 11:50:42.548615   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/key.pem (1679 bytes)
	I1202 11:50:42.548656   95364 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem (1708 bytes)
	I1202 11:50:42.548684   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299.pem -> /usr/share/ca-certificates/13299.pem
	I1202 11:50:42.548698   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem -> /usr/share/ca-certificates/132992.pem
	I1202 11:50:42.548711   95364 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:50:42.548727   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 11:50:42.571207   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1202 11:50:42.596442   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 11:50:42.622964   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1202 11:50:42.651395   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/certs/13299.pem --> /usr/share/ca-certificates/13299.pem (1338 bytes)
	I1202 11:50:42.677912   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/ssl/certs/132992.pem --> /usr/share/ca-certificates/132992.pem (1708 bytes)
	I1202 11:50:42.699628   95364 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 11:50:42.723216   95364 ssh_runner.go:195] Run: openssl version
	I1202 11:50:42.728129   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 11:50:42.739141   95364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:50:42.742947   95364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 11:31 /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:50:42.743007   95364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 11:50:42.749619   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 11:50:42.758540   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13299.pem && ln -fs /usr/share/ca-certificates/13299.pem /etc/ssl/certs/13299.pem"
	I1202 11:50:42.767466   95364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13299.pem
	I1202 11:50:42.770809   95364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 11:39 /usr/share/ca-certificates/13299.pem
	I1202 11:50:42.770857   95364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13299.pem
	I1202 11:50:42.777284   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13299.pem /etc/ssl/certs/51391683.0"
	I1202 11:50:42.786492   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/132992.pem && ln -fs /usr/share/ca-certificates/132992.pem /etc/ssl/certs/132992.pem"
	I1202 11:50:42.795480   95364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/132992.pem
	I1202 11:50:42.798864   95364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 11:39 /usr/share/ca-certificates/132992.pem
	I1202 11:50:42.798912   95364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/132992.pem
	I1202 11:50:42.805132   95364 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/132992.pem /etc/ssl/certs/3ec20f2e.0"
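Each CA certificate is placed under /usr/share/ca-certificates and then exposed to OpenSSL through a symlink named after its subject hash in /etc/ssl/certs (the `openssl x509 -hash -noout` plus `ln -fs` pairs above, e.g. minikubeCA.pem becoming b5213941.0). A small sketch of that hash-and-link step, shelling out to openssl for the hash value; the cert path is a placeholder:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 -> certPath, the layout
// OpenSSL uses to locate trusted CAs, matching the ln -fs commands above.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```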
	I1202 11:50:42.813120   95364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 11:50:42.816396   95364 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 11:50:42.816435   95364 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.2  false true} ...
	I1202 11:50:42.816510   95364 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-093284-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-093284 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
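The kubelet unit drop-in shown above is rendered from per-node values (hostname override, node IP, Kubernetes version) and written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small text/template sketch of that rendering follows; the template is trimmed to the flags visible in the log and is not kubeadm.go's exact template.

```go
package main

import (
	"os"
	"text/template"
)

// Trimmed-down version of the unit shown above; only the per-node fields vary.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the node being provisioned in the log above.
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.2", "ha-093284-m04", "192.168.49.5"})
}
```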
	I1202 11:50:42.816556   95364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1202 11:50:42.824508   95364 binaries.go:44] Found k8s binaries, skipping transfer
	I1202 11:50:42.824558   95364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1202 11:50:42.832649   95364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1202 11:50:42.850434   95364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 11:50:42.866274   95364 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1202 11:50:42.869538   95364 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
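The grep/echo/cp pipeline above keeps the control-plane.minikube.internal entry in /etc/hosts idempotent: any stale line for the name is dropped, the fresh VIP mapping is appended, and the file is rewritten. A Go sketch of the same rewrite, illustrative rather than minikube's code:

```go
package main

import (
	"os"
	"strings"
)

// ensureHostsEntry reproduces the grep -v / echo / cp pipeline above: drop any
// existing line ending in "\t<name>", append the fresh mapping, rewrite the file.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal")
}
```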
	I1202 11:50:42.879586   95364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:50:42.959243   95364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:50:42.969749   95364 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1202 11:50:42.969972   95364 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:50:42.972338   95364 out.go:177] * Verifying Kubernetes components...
	I1202 11:50:42.973687   95364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 11:50:43.050164   95364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 11:50:43.061793   95364 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:50:43.062034   95364 kapi.go:59] client config for ha-093284: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/client.crt", KeyFile:"/home/jenkins/minikube-integration/20033-6540/.minikube/profiles/ha-093284/client.key", CAFile:"/home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1202 11:50:43.062109   95364 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1202 11:50:43.062366   95364 node_ready.go:35] waiting up to 6m0s for node "ha-093284-m04" to be "Ready" ...
	I1202 11:50:43.062448   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:43.062459   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:43.062470   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:43.062480   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:43.065409   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:43.563577   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:43.563596   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:43.563604   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:43.563608   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:43.566265   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:44.063396   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:44.063418   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:44.063426   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:44.063430   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:44.066144   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:44.562961   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:44.562983   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:44.563011   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:44.563017   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:44.565696   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:45.063473   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:45.063494   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:45.063501   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:45.063505   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:45.066127   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:45.066656   95364 node_ready.go:53] node "ha-093284-m04" has status "Ready":"Unknown"
	I1202 11:50:45.562608   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:45.562633   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:45.562640   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:45.562646   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:45.565607   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:46.063468   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:46.063495   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:46.063510   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:46.063515   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:46.066450   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:46.563283   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:46.563304   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:46.563318   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:46.563322   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:46.566219   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:47.063049   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:47.063070   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:47.063078   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:47.063082   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:47.065697   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:47.563568   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:47.563594   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:47.563602   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:47.563605   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:47.566443   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:47.566972   95364 node_ready.go:53] node "ha-093284-m04" has status "Ready":"Unknown"
	I1202 11:50:48.063353   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:48.063376   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:48.063384   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:48.063387   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:48.066028   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:48.563557   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:48.563579   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:48.563586   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:48.563590   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:48.566393   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:49.063340   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:49.063362   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:49.063370   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:49.063374   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:49.066074   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:49.562975   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:49.562998   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:49.563011   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:49.563016   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:49.565726   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:50.062555   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:50.062586   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:50.062594   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:50.062598   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:50.065280   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:50.065793   95364 node_ready.go:53] node "ha-093284-m04" has status "Ready":"Unknown"
	I1202 11:50:50.563341   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:50:50.563368   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:50.563380   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:50.563387   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:50.566118   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:50.566596   95364 node_ready.go:49] node "ha-093284-m04" has status "Ready":"True"
	I1202 11:50:50.566616   95364 node_ready.go:38] duration metric: took 7.504228944s for node "ha-093284-m04" to be "Ready" ...
	I1202 11:50:50.566627   95364 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:50:50.566692   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1202 11:50:50.566704   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:50.566715   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:50.566721   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:50.570861   95364 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:50:50.576888   95364 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-k72v5" in "kube-system" namespace to be "Ready" ...
	I1202 11:50:50.576991   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-k72v5
	I1202 11:50:50.577008   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:50.577020   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:50.577027   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:50.579762   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:50.580376   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:50.580391   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:50.580398   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:50.580401   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:50.582656   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:50.583107   95364 pod_ready.go:93] pod "coredns-7c65d6cfc9-k72v5" in "kube-system" namespace has status "Ready":"True"
	I1202 11:50:50.583123   95364 pod_ready.go:82] duration metric: took 6.207011ms for pod "coredns-7c65d6cfc9-k72v5" in "kube-system" namespace to be "Ready" ...
	I1202 11:50:50.583132   95364 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace to be "Ready" ...
	I1202 11:50:50.583183   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:50.583192   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:50.583199   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:50.583202   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:50.585293   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:50.585795   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:50.585809   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:50.585818   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:50.585828   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:50.587725   95364 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:50:51.083561   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:51.083582   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:51.083591   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:51.083596   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:51.086424   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:51.087223   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:51.087242   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:51.087253   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:51.087260   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:51.089573   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:51.583390   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:51.583413   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:51.583422   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:51.583428   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:51.586232   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:51.586960   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:51.586979   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:51.586989   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:51.586996   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:51.590424   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:50:52.084277   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:52.084301   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:52.084313   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:52.084318   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:52.087524   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:50:52.088156   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:52.088173   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:52.088181   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:52.088184   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:52.090428   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:52.583299   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:52.583323   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:52.583333   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:52.583338   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:52.586055   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:52.586722   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:52.586736   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:52.586744   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:52.586747   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:52.589150   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:52.589708   95364 pod_ready.go:103] pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace has status "Ready":"False"
	I1202 11:50:53.084102   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:53.084121   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:53.084127   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:53.084130   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:53.086952   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:53.087684   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:53.087702   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:53.087713   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:53.087719   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:53.089830   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:53.584170   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:53.584191   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:53.584198   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:53.584202   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:53.587113   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:53.587831   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:53.587851   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:53.587862   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:53.587867   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:53.590193   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:54.083793   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:54.083815   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:54.083825   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:54.083830   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:54.086824   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:54.087473   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:54.087489   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:54.087503   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:54.087507   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:54.089904   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:54.583722   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:54.583747   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:54.583759   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:54.583765   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:54.586662   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:54.587386   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:54.587401   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:54.587409   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:54.587413   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:54.589690   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:54.590324   95364 pod_ready.go:103] pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace has status "Ready":"False"
	I1202 11:50:55.083430   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:55.083454   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:55.083462   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:55.083467   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:55.086372   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:55.087129   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:55.087149   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:55.087160   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:55.087166   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:55.091636   95364 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1202 11:50:55.583538   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:55.583559   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:55.583567   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:55.583570   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:55.586582   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:55.587359   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:55.587375   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:55.587383   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:55.587391   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:55.589953   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:56.083942   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:56.083966   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:56.083973   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:56.083977   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:56.086879   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:56.087639   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:56.087656   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:56.087663   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:56.087666   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:56.089957   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:56.583779   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:56.583799   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:56.583807   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:56.583812   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:56.586881   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:50:56.587552   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:56.587569   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:56.587577   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:56.587581   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:56.590083   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:56.590556   95364 pod_ready.go:103] pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace has status "Ready":"False"
	I1202 11:50:57.083978   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:57.084001   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:57.084008   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:57.084011   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:57.087066   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:50:57.087629   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:57.087646   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:57.087652   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:57.087655   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:57.089814   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:57.583666   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:57.583690   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:57.583700   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:57.583706   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:57.586614   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:57.587439   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:57.587463   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:57.587476   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:57.587482   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:57.589854   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:58.083635   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:58.083660   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:58.083671   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:58.083676   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:58.086710   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:50:58.087315   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:58.087336   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:58.087344   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:58.087350   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:58.089933   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:58.583940   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:58.583963   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:58.583970   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:58.583975   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:58.586783   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:58.587465   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:58.587487   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:58.587494   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:58.587498   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:58.590145   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:58.590684   95364 pod_ready.go:103] pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace has status "Ready":"False"
	I1202 11:50:59.083346   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:59.083371   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:59.083380   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:59.083384   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:59.086661   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:50:59.087446   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:59.087466   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:59.087477   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:59.087484   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:59.089938   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:59.583887   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:50:59.583912   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:59.583922   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:59.583928   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:59.586823   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:50:59.587517   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:50:59.587532   95364 round_trippers.go:469] Request Headers:
	I1202 11:50:59.587540   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:50:59.587543   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:50:59.590010   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:00.083956   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:00.083977   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:00.083984   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:00.083988   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:00.086920   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:00.087638   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:00.087655   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:00.087662   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:00.087667   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:00.089774   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:00.583875   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:00.583896   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:00.583905   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:00.583908   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:00.586683   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:00.587322   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:00.587338   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:00.587346   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:00.587351   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:00.589703   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:01.083543   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:01.083564   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:01.083571   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:01.083576   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:01.086369   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:01.087605   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:01.087626   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:01.087639   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:01.087644   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:01.090599   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:01.091084   95364 pod_ready.go:103] pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace has status "Ready":"False"
	I1202 11:51:01.583383   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:01.583405   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:01.583412   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:01.583417   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:01.586318   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:01.587034   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:01.587050   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:01.587057   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:01.587060   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:01.589555   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:02.083322   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:02.083359   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:02.083368   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:02.083373   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:02.086143   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:02.086750   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:02.086768   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:02.086775   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:02.086779   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:02.088848   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:02.583588   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:02.583608   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:02.583616   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:02.583620   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:02.586312   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:02.586973   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:02.586991   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:02.586999   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:02.587003   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:02.589255   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:03.084176   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:03.084204   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:03.084215   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:03.084221   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:03.087006   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:03.087763   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:03.087784   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:03.087794   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:03.087802   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:03.090094   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:03.583297   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:03.583320   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:03.583329   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:03.583345   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:03.586142   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:03.586800   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:03.586817   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:03.586824   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:03.586828   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:03.589102   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:03.589547   95364 pod_ready.go:103] pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace has status "Ready":"False"
	I1202 11:51:04.083922   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:04.083944   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:04.083952   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:04.083955   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:04.086832   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:04.087524   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:04.087552   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:04.087564   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:04.087570   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:04.090319   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:04.584221   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:04.584241   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:04.584249   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:04.584253   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:04.587377   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:51:04.588134   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:04.588154   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:04.588165   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:04.588171   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:04.590474   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:05.084335   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:05.084356   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:05.084372   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:05.084378   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:05.087190   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:05.087757   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:05.087772   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:05.087783   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:05.087789   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:05.090002   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:05.584002   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:05.584023   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:05.584031   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:05.584036   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:05.587039   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:05.587728   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:05.587743   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:05.587750   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:05.587755   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:05.590115   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:05.590561   95364 pod_ready.go:103] pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace has status "Ready":"False"
	I1202 11:51:06.083941   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:06.083961   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:06.083969   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:06.083972   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:06.086781   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:06.087414   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:06.087431   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:06.087440   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:06.087446   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:06.089917   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:06.583677   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:06.583700   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:06.583709   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:06.583716   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:06.586449   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:06.587247   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:06.587269   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:06.587280   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:06.587287   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:06.589639   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:07.083444   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:07.083471   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:07.083479   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:07.083484   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:07.086357   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:07.087042   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:07.087057   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:07.087064   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:07.087068   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:07.089471   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:07.584391   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:07.584413   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:07.584420   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:07.584425   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:07.587205   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:07.587838   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:07.587855   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:07.587862   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:07.587867   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:07.590069   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:08.083998   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:08.084018   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:08.084032   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:08.084036   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:08.086827   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:08.087436   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:08.087452   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:08.087460   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:08.087466   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:08.089638   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:08.090178   95364 pod_ready.go:103] pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace has status "Ready":"False"
	I1202 11:51:08.583276   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:08.583295   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:08.583303   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:08.583306   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:08.586114   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:08.586719   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:08.586734   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:08.586741   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:08.586745   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:08.588936   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:09.083751   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:09.083777   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:09.083786   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:09.083791   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:09.086824   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:51:09.087577   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:09.087595   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:09.087602   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:09.087606   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:09.090134   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:09.584007   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:09.584033   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:09.584041   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:09.584045   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:09.587093   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:51:09.587734   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:09.587751   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:09.587759   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:09.587764   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:09.590128   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:10.084020   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:10.084043   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:10.084051   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:10.084054   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:10.086898   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:10.087567   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:10.087585   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:10.087593   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:10.087597   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:10.090039   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:10.090546   95364 pod_ready.go:103] pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace has status "Ready":"False"
	I1202 11:51:10.584081   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:10.584101   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:10.584109   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:10.584113   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:10.587058   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:10.587763   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:10.587779   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:10.587786   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:10.587789   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:10.589901   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:11.083402   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:11.083426   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:11.083438   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:11.083445   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:11.086359   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:11.087153   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:11.087168   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:11.087175   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:11.087180   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:11.089390   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:11.584349   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:11.584369   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:11.584376   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:11.584380   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:11.587448   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:51:11.588116   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:11.588134   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:11.588141   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:11.588146   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:11.590695   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:12.083523   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:12.083544   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.083554   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.083561   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.086246   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:12.086906   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:12.086923   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.086933   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.086939   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.089373   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:12.584327   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s9tph
	I1202 11:51:12.584353   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.584362   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.584367   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.587444   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:51:12.588112   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:12.588130   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.588138   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.588143   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.590491   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:12.590956   95364 pod_ready.go:98] node "ha-093284" hosting pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:12.590979   95364 pod_ready.go:82] duration metric: took 22.007840148s for pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace to be "Ready" ...
	E1202 11:51:12.590988   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284" hosting pod "coredns-7c65d6cfc9-s9tph" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:12.590995   95364 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-093284" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:12.591067   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-093284
	I1202 11:51:12.591075   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.591082   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.591086   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.593157   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:12.593703   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:12.593719   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.593726   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.593731   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.595746   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:12.596371   95364 pod_ready.go:98] node "ha-093284" hosting pod "etcd-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:12.596395   95364 pod_ready.go:82] duration metric: took 5.390806ms for pod "etcd-ha-093284" in "kube-system" namespace to be "Ready" ...
	E1202 11:51:12.596449   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284" hosting pod "etcd-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:12.596494   95364 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:12.596569   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-093284-m02
	I1202 11:51:12.596580   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.596591   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.596601   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.598542   95364 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:51:12.599075   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:51:12.599091   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.599098   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.599102   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.601087   95364 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:51:12.601537   95364 pod_ready.go:93] pod "etcd-ha-093284-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:51:12.601555   95364 pod_ready.go:82] duration metric: took 5.050069ms for pod "etcd-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:12.601571   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-093284" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:12.601617   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-093284
	I1202 11:51:12.601627   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.601634   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.601638   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.603449   95364 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:51:12.603974   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:12.603989   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.603995   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.604000   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.605757   95364 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:51:12.606232   95364 pod_ready.go:98] node "ha-093284" hosting pod "kube-apiserver-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:12.606250   95364 pod_ready.go:82] duration metric: took 4.674049ms for pod "kube-apiserver-ha-093284" in "kube-system" namespace to be "Ready" ...
	E1202 11:51:12.606258   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284" hosting pod "kube-apiserver-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:12.606266   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:12.606313   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-093284-m02
	I1202 11:51:12.606322   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.606328   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.606333   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.608301   95364 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:51:12.608857   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:51:12.608871   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.608880   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.608885   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.610592   95364 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1202 11:51:12.610957   95364 pod_ready.go:93] pod "kube-apiserver-ha-093284-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:51:12.610973   95364 pod_ready.go:82] duration metric: took 4.701939ms for pod "kube-apiserver-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:12.610982   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-093284" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:12.784336   95364 request.go:632] Waited for 173.255183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-093284
	I1202 11:51:12.784391   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-093284
	I1202 11:51:12.784396   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.784404   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.784408   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.787264   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:12.984391   95364 request.go:632] Waited for 196.296847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:12.984457   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:12.984464   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:12.984476   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:12.984495   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:12.987302   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:12.987781   95364 pod_ready.go:98] node "ha-093284" hosting pod "kube-controller-manager-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:12.987799   95364 pod_ready.go:82] duration metric: took 376.811636ms for pod "kube-controller-manager-ha-093284" in "kube-system" namespace to be "Ready" ...
	E1202 11:51:12.987808   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284" hosting pod "kube-controller-manager-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:12.987815   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:13.184994   95364 request.go:632] Waited for 197.106004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-093284-m02
	I1202 11:51:13.185075   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-093284-m02
	I1202 11:51:13.185085   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:13.185093   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:13.185097   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:13.187813   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:13.384925   95364 request.go:632] Waited for 196.365564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:51:13.384994   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:51:13.385001   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:13.385009   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:13.385017   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:13.387535   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:13.387956   95364 pod_ready.go:93] pod "kube-controller-manager-ha-093284-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:51:13.387974   95364 pod_ready.go:82] duration metric: took 400.150583ms for pod "kube-controller-manager-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:13.387985   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ddc8v" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:13.584357   95364 request.go:632] Waited for 196.276246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ddc8v
	I1202 11:51:13.584436   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ddc8v
	I1202 11:51:13.584444   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:13.584452   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:13.584458   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:13.587450   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:13.784361   95364 request.go:632] Waited for 196.300455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:13.784412   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:13.784419   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:13.784437   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:13.784447   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:13.787192   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:13.787662   95364 pod_ready.go:98] node "ha-093284" hosting pod "kube-proxy-ddc8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:13.787684   95364 pod_ready.go:82] duration metric: took 399.671324ms for pod "kube-proxy-ddc8v" in "kube-system" namespace to be "Ready" ...
	E1202 11:51:13.787693   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284" hosting pod "kube-proxy-ddc8v" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:13.787702   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5zm7" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:13.984709   95364 request.go:632] Waited for 196.945142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5zm7
	I1202 11:51:13.984797   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5zm7
	I1202 11:51:13.984813   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:13.984821   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:13.984826   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:13.987645   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:14.184600   95364 request.go:632] Waited for 196.351019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:51:14.184686   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:51:14.184701   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:14.184717   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:14.184728   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:14.187591   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:14.188082   95364 pod_ready.go:93] pod "kube-proxy-g5zm7" in "kube-system" namespace has status "Ready":"True"
	I1202 11:51:14.188105   95364 pod_ready.go:82] duration metric: took 400.392619ms for pod "kube-proxy-g5zm7" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:14.188120   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nbwvv" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:14.385158   95364 request.go:632] Waited for 196.946145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbwvv
	I1202 11:51:14.385217   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nbwvv
	I1202 11:51:14.385223   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:14.385231   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:14.385238   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:14.387981   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:14.584963   95364 request.go:632] Waited for 196.344417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:51:14.585036   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m04
	I1202 11:51:14.585042   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:14.585049   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:14.585056   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:14.587744   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:14.588227   95364 pod_ready.go:93] pod "kube-proxy-nbwvv" in "kube-system" namespace has status "Ready":"True"
	I1202 11:51:14.588252   95364 pod_ready.go:82] duration metric: took 400.118835ms for pod "kube-proxy-nbwvv" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:14.588289   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-093284" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:14.785346   95364 request.go:632] Waited for 196.949816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-093284
	I1202 11:51:14.785412   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-093284
	I1202 11:51:14.785420   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:14.785430   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:14.785443   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:14.788560   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:51:14.984382   95364 request.go:632] Waited for 195.221809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:14.984455   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284
	I1202 11:51:14.984462   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:14.984474   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:14.984484   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:14.987406   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:14.987944   95364 pod_ready.go:98] node "ha-093284" hosting pod "kube-scheduler-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:14.987968   95364 pod_ready.go:82] duration metric: took 399.666774ms for pod "kube-scheduler-ha-093284" in "kube-system" namespace to be "Ready" ...
	E1202 11:51:14.987979   95364 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-093284" hosting pod "kube-scheduler-ha-093284" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-093284" has status "Ready":"Unknown"
	I1202 11:51:14.987988   95364 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:15.185055   95364 request.go:632] Waited for 196.939397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-093284-m02
	I1202 11:51:15.185126   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-093284-m02
	I1202 11:51:15.185137   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:15.185148   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:15.185156   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:15.187934   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:15.384828   95364 request.go:632] Waited for 196.354459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:51:15.384896   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-093284-m02
	I1202 11:51:15.384907   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:15.384917   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:15.384930   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:15.387758   95364 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1202 11:51:15.388401   95364 pod_ready.go:93] pod "kube-scheduler-ha-093284-m02" in "kube-system" namespace has status "Ready":"True"
	I1202 11:51:15.388422   95364 pod_ready.go:82] duration metric: took 400.426899ms for pod "kube-scheduler-ha-093284-m02" in "kube-system" namespace to be "Ready" ...
	I1202 11:51:15.388434   95364 pod_ready.go:39] duration metric: took 24.821795674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1202 11:51:15.388450   95364 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 11:51:15.388498   95364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:51:15.399898   95364 system_svc.go:56] duration metric: took 11.439431ms WaitForService to wait for kubelet
	I1202 11:51:15.399927   95364 kubeadm.go:582] duration metric: took 32.430141317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 11:51:15.399942   95364 node_conditions.go:102] verifying NodePressure condition ...
	I1202 11:51:15.585330   95364 request.go:632] Waited for 185.30097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1202 11:51:15.585417   95364 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I1202 11:51:15.585430   95364 round_trippers.go:469] Request Headers:
	I1202 11:51:15.585441   95364 round_trippers.go:473]     Accept: application/json, */*
	I1202 11:51:15.585449   95364 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1202 11:51:15.588787   95364 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1202 11:51:15.589787   95364 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 11:51:15.589806   95364 node_conditions.go:123] node cpu capacity is 8
	I1202 11:51:15.589816   95364 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 11:51:15.589819   95364 node_conditions.go:123] node cpu capacity is 8
	I1202 11:51:15.589823   95364 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1202 11:51:15.589826   95364 node_conditions.go:123] node cpu capacity is 8
	I1202 11:51:15.589829   95364 node_conditions.go:105] duration metric: took 189.883046ms to run NodePressure ...
	I1202 11:51:15.589842   95364 start.go:241] waiting for startup goroutines ...
	I1202 11:51:15.589873   95364 start.go:255] writing updated cluster config ...
	I1202 11:51:15.590163   95364 ssh_runner.go:195] Run: rm -f paused
	I1202 11:51:15.637492   95364 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1202 11:51:15.639750   95364 out.go:177] * Done! kubectl is now configured to use "ha-093284" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 11:50:32 ha-093284 crio[685]: time="2024-12-02 11:50:32.109265462Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c7335745f9de46596c3f04f94f82bcd2aa02b583e8a0e38feed0a6e55e958841/merged/etc/group: no such file or directory"
	Dec 02 11:50:32 ha-093284 crio[685]: time="2024-12-02 11:50:32.150235311Z" level=info msg="Created container 98cc1221fda6b2c1ca79f202b21d226a992e9b7769dfb0e47853733bbb16a77a: kube-system/kube-vip-ha-093284/kube-vip" id=73cf7204-de45-4ff7-95e8-1103c986a670 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 11:50:32 ha-093284 crio[685]: time="2024-12-02 11:50:32.150979793Z" level=info msg="Starting container: 98cc1221fda6b2c1ca79f202b21d226a992e9b7769dfb0e47853733bbb16a77a" id=54a3cde9-3c08-484f-90d6-f90b65d9b885 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 11:50:32 ha-093284 crio[685]: time="2024-12-02 11:50:32.160871826Z" level=info msg="Started container" PID=2045 containerID=98cc1221fda6b2c1ca79f202b21d226a992e9b7769dfb0e47853733bbb16a77a description=kube-system/kube-vip-ha-093284/kube-vip id=54a3cde9-3c08-484f-90d6-f90b65d9b885 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b338f24b59772dda6c84c424fada270768def792ec581a5c2b97dd3a9445d43c
	Dec 02 11:50:41 ha-093284 crio[685]: time="2024-12-02 11:50:41.921878361Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.2" id=1559973c-f197-4c03-b42a-b138bc5750ab name=/runtime.v1.ImageService/ImageStatus
	Dec 02 11:50:41 ha-093284 crio[685]: time="2024-12-02 11:50:41.922185951Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752],Size_:89474374,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=1559973c-f197-4c03-b42a-b138bc5750ab name=/runtime.v1.ImageService/ImageStatus
	Dec 02 11:50:41 ha-093284 crio[685]: time="2024-12-02 11:50:41.922859405Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.2" id=72fe5444-f60c-4dc2-bf5c-97a170210424 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 11:50:41 ha-093284 crio[685]: time="2024-12-02 11:50:41.923118356Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752],Size_:89474374,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=72fe5444-f60c-4dc2-bf5c-97a170210424 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 11:50:41 ha-093284 crio[685]: time="2024-12-02 11:50:41.923743278Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-093284/kube-controller-manager" id=a37455ac-492b-46f8-b485-75302e8c5a5e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 11:50:41 ha-093284 crio[685]: time="2024-12-02 11:50:41.923849757Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 02 11:50:41 ha-093284 crio[685]: time="2024-12-02 11:50:41.997059871Z" level=info msg="Created container 948983526a98b84dd4f01bf5a95deaaae98d812687416365d0d8779294effd0e: kube-system/kube-controller-manager-ha-093284/kube-controller-manager" id=a37455ac-492b-46f8-b485-75302e8c5a5e name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 11:50:41 ha-093284 crio[685]: time="2024-12-02 11:50:41.997668884Z" level=info msg="Starting container: 948983526a98b84dd4f01bf5a95deaaae98d812687416365d0d8779294effd0e" id=4f09c762-6350-4c99-afe9-f22ed305c760 name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 11:50:42 ha-093284 crio[685]: time="2024-12-02 11:50:42.003573388Z" level=info msg="Started container" PID=2091 containerID=948983526a98b84dd4f01bf5a95deaaae98d812687416365d0d8779294effd0e description=kube-system/kube-controller-manager-ha-093284/kube-controller-manager id=4f09c762-6350-4c99-afe9-f22ed305c760 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3eb42ddef09e3bda02d65eaebaba4268f042802efc6ee2ad06df8c409f72da55
	Dec 02 11:50:42 ha-093284 conmon[1500]: conmon 94f2b4944836e073d3a5 <ninfo>: container 1512 exited with status 1
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.121245742Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7608d97a-dbb6-455d-b19a-dbd25dedca34 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.121430678Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7608d97a-dbb6-455d-b19a-dbd25dedca34 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.122080870Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=17fc7b89-4aeb-4b59-87af-ef5afe6aea86 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.122293586Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=17fc7b89-4aeb-4b59-87af-ef5afe6aea86 name=/runtime.v1.ImageService/ImageStatus
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.122857123Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2eb0ff10-a8b8-45df-9889-9f4b37b01de9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.122958579Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.135822132Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/03c2dc35c741ab657c1aecb04445b6c2a8e476c39a63e6c488d1d2adb2a20d48/merged/etc/passwd: no such file or directory"
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.135858961Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/03c2dc35c741ab657c1aecb04445b6c2a8e476c39a63e6c488d1d2adb2a20d48/merged/etc/group: no such file or directory"
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.169803194Z" level=info msg="Created container c563fe007869df9bce74f26a63cd2c1947398f03962e573e0ab6f8e065d17b55: kube-system/storage-provisioner/storage-provisioner" id=2eb0ff10-a8b8-45df-9889-9f4b37b01de9 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.170416903Z" level=info msg="Starting container: c563fe007869df9bce74f26a63cd2c1947398f03962e573e0ab6f8e065d17b55" id=a2ea9e70-c67b-4f28-96b6-541f267e93db name=/runtime.v1.RuntimeService/StartContainer
	Dec 02 11:50:43 ha-093284 crio[685]: time="2024-12-02 11:50:43.175771727Z" level=info msg="Started container" PID=2150 containerID=c563fe007869df9bce74f26a63cd2c1947398f03962e573e0ab6f8e065d17b55 description=kube-system/storage-provisioner/storage-provisioner id=a2ea9e70-c67b-4f28-96b6-541f267e93db name=/runtime.v1.RuntimeService/StartContainer sandboxID=1516cdd4f33cc6c7350c383bc77637c6ed99d8fc7631802fd5ab05b507710dc9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c563fe007869d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   34 seconds ago       Running             storage-provisioner       5                   1516cdd4f33cc       storage-provisioner
	948983526a98b       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   35 seconds ago       Running             kube-controller-manager   6                   3eb42ddef09e3       kube-controller-manager-ha-093284
	98cc1221fda6b       4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812   45 seconds ago       Running             kube-vip                  3                   b338f24b59772       kube-vip-ha-093284
	1a649cec3a74d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   48 seconds ago       Running             kube-apiserver            4                   60bcb03168759       kube-apiserver-ha-093284
	6d1ab74f3fa6e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Running             coredns                   2                   1f51867795fea       coredns-7c65d6cfc9-k72v5
	faeb644493644       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   About a minute ago   Running             busybox                   2                   f3278892a5bb8       busybox-7dff88458-wljw5
	bbc84830793d8       9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5   About a minute ago   Running             kindnet-cni               2                   715a74340d759       kindnet-6z757
	5283f28824984       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   About a minute ago   Running             kube-proxy                2                   7cb568703d0c3       kube-proxy-ddc8v
	94f2b4944836e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       4                   1516cdd4f33cc       storage-provisioner
	8eea79962cb8a       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   About a minute ago   Exited              kube-controller-manager   5                   3eb42ddef09e3       kube-controller-manager-ha-093284
	ee2d0acb98ab2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Running             coredns                   2                   3f3627992fd27       coredns-7c65d6cfc9-s9tph
	c82502e5a5109       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      2                   906ab14635daa       etcd-ha-093284
	edf20a69b72e4       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   About a minute ago   Exited              kube-apiserver            3                   60bcb03168759       kube-apiserver-ha-093284
	d3785b0414109       4b34defda8067616b43718c1281963ddcae6790077aa451c8d0cca8e07f5d812   About a minute ago   Exited              kube-vip                  2                   b338f24b59772       kube-vip-ha-093284
	d7814a4d94ddc       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   About a minute ago   Running             kube-scheduler            2                   1f2098f2676e5       kube-scheduler-ha-093284
	
	
	==> coredns [6d1ab74f3fa6e9f77c4272bbb71657fe37b8062b7d90aa87892c4a5070840118] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37619 - 56933 "HINFO IN 5763497922776823481.8357622621567558867. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.159201055s
	
	
	==> coredns [ee2d0acb98ab27dcc607b128e115f0b685a41c66b1939052a4d1b61cd8d84444] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36791 - 24333 "HINFO IN 2762028260825981439.8537255287300310061. udp 57 false 512" - - 0 6.00054649s
	[ERROR] plugin/errors: 2 2762028260825981439.8537255287300310061. HINFO: read udp 10.244.0.2:48555->192.168.49.1:53: i/o timeout
	[INFO] 127.0.0.1:38384 - 40765 "HINFO IN 2762028260825981439.8537255287300310061. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027235672s
	[INFO] 127.0.0.1:59151 - 25845 "HINFO IN 2762028260825981439.8537255287300310061. udp 57 false 512" NXDOMAIN qr,rd,ra 57 4.003065745s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2006035588]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (02-Dec-2024 11:50:04.010) (total time: 30000ms):
	Trace[2006035588]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:50:34.010)
	Trace[2006035588]: [30.000495527s] [30.000495527s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[795860390]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (02-Dec-2024 11:50:04.010) (total time: 30000ms):
	Trace[795860390]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:50:34.010)
	Trace[795860390]: [30.000709387s] [30.000709387s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[752398707]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (02-Dec-2024 11:50:04.010) (total time: 30000ms):
	Trace[752398707]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:50:34.011)
	Trace[752398707]: [30.000270464s] [30.000270464s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-093284
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-093284
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-093284
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_02T11_43_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:43:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-093284
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:51:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 02 Dec 2024 11:49:57 +0000   Mon, 02 Dec 2024 11:51:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 02 Dec 2024 11:49:57 +0000   Mon, 02 Dec 2024 11:51:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 02 Dec 2024 11:49:57 +0000   Mon, 02 Dec 2024 11:51:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 02 Dec 2024 11:49:57 +0000   Mon, 02 Dec 2024 11:51:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-093284
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 9d95bec8cf98463185954781ad850c56
	  System UUID:                e0383247-6b5b-45ba-908b-397ac227b865
	  Boot ID:                    2a9b6797-354b-47aa-b86d-31dcdc265ca8
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wljw5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m56s
	  kube-system                 coredns-7c65d6cfc9-k72v5             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m11s
	  kube-system                 coredns-7c65d6cfc9-s9tph             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m11s
	  kube-system                 etcd-ha-093284                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m16s
	  kube-system                 kindnet-6z757                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m12s
	  kube-system                 kube-apiserver-ha-093284             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-controller-manager-ha-093284    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-ddc8v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-scheduler-ha-093284             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-vip-ha-093284                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m11s                  kube-proxy       
	  Normal   Starting                 63s                    kube-proxy       
	  Normal   Starting                 3m42s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  8m16s                  kubelet          Node ha-093284 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 8m16s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 8m16s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    8m16s                  kubelet          Node ha-093284 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m16s                  kubelet          Node ha-093284 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m12s                  node-controller  Node ha-093284 event: Registered Node ha-093284 in Controller
	  Normal   NodeReady                7m58s                  kubelet          Node ha-093284 status is now: NodeReady
	  Normal   RegisteredNode           7m48s                  node-controller  Node ha-093284 event: Registered Node ha-093284 in Controller
	  Normal   RegisteredNode           7m13s                  node-controller  Node ha-093284 event: Registered Node ha-093284 in Controller
	  Normal   RegisteredNode           5m19s                  node-controller  Node ha-093284 event: Registered Node ha-093284 in Controller
	  Normal   NodeHasSufficientPID     4m39s (x7 over 4m39s)  kubelet          Node ha-093284 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    4m39s (x8 over 4m39s)  kubelet          Node ha-093284 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  4m39s (x8 over 4m39s)  kubelet          Node ha-093284 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 4m39s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 4m39s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m4s                   node-controller  Node ha-093284 event: Registered Node ha-093284 in Controller
	  Normal   RegisteredNode           3m59s                  node-controller  Node ha-093284 event: Registered Node ha-093284 in Controller
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-093284 event: Registered Node ha-093284 in Controller
	  Normal   Starting                 118s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 118s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 118s)    kubelet          Node ha-093284 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 118s)    kubelet          Node ha-093284 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 118s)    kubelet          Node ha-093284 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           85s                    node-controller  Node ha-093284 event: Registered Node ha-093284 in Controller
	  Normal   RegisteredNode           34s                    node-controller  Node ha-093284 event: Registered Node ha-093284 in Controller
	  Normal   NodeNotReady             5s                     node-controller  Node ha-093284 status is now: NodeNotReady
	
	
	Name:               ha-093284-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-093284-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-093284
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_43_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:43:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-093284-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:51:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:49:50 +0000   Mon, 02 Dec 2024 11:43:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:49:50 +0000   Mon, 02 Dec 2024 11:43:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:49:50 +0000   Mon, 02 Dec 2024 11:43:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:49:50 +0000   Mon, 02 Dec 2024 11:43:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-093284-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d86a329d5ab4be68a386a53e961d55a
	  System UUID:                709e9537-58a1-41b6-be32-fe51e381fdf8
	  Boot ID:                    2a9b6797-354b-47aa-b86d-31dcdc265ca8
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fwgsp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  kube-system                 etcd-ha-093284-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m55s
	  kube-system                 kindnet-qtflb                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m57s
	  kube-system                 kube-apiserver-ha-093284-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-controller-manager-ha-093284-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-proxy-g5zm7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-scheduler-ha-093284-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-vip-ha-093284-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m53s                  kube-proxy       
	  Normal   Starting                 5m24s                  kube-proxy       
	  Normal   Starting                 4m2s                   kube-proxy       
	  Normal   Starting                 73s                    kube-proxy       
	  Normal   NodeHasSufficientPID     7m56s (x7 over 7m57s)  kubelet          Node ha-093284-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    7m56s (x8 over 7m57s)  kubelet          Node ha-093284-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  7m56s (x8 over 7m57s)  kubelet          Node ha-093284-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           7m53s                  node-controller  Node ha-093284-m02 event: Registered Node ha-093284-m02 in Controller
	  Normal   RegisteredNode           7m49s                  node-controller  Node ha-093284-m02 event: Registered Node ha-093284-m02 in Controller
	  Normal   RegisteredNode           7m14s                  node-controller  Node ha-093284-m02 event: Registered Node ha-093284-m02 in Controller
	  Normal   NodeHasSufficientPID     5m48s (x7 over 5m48s)  kubelet          Node ha-093284-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m48s (x8 over 5m48s)  kubelet          Node ha-093284-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m48s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m48s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m48s (x8 over 5m48s)  kubelet          Node ha-093284-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           5m20s                  node-controller  Node ha-093284-m02 event: Registered Node ha-093284-m02 in Controller
	  Normal   NodeHasSufficientMemory  4m38s (x8 over 4m38s)  kubelet          Node ha-093284-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     4m38s (x7 over 4m38s)  kubelet          Node ha-093284-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    4m38s (x8 over 4m38s)  kubelet          Node ha-093284-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 4m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m38s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-093284-m02 event: Registered Node ha-093284-m02 in Controller
	  Normal   RegisteredNode           4m                     node-controller  Node ha-093284-m02 event: Registered Node ha-093284-m02 in Controller
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-093284-m02 event: Registered Node ha-093284-m02 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-093284-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-093284-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node ha-093284-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           86s                    node-controller  Node ha-093284-m02 event: Registered Node ha-093284-m02 in Controller
	  Normal   RegisteredNode           35s                    node-controller  Node ha-093284-m02 event: Registered Node ha-093284-m02 in Controller
	
	
	Name:               ha-093284-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-093284-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
	                    minikube.k8s.io/name=ha-093284
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_02T11_44_37_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Dec 2024 11:44:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-093284-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Dec 2024 11:51:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Dec 2024 11:50:50 +0000   Mon, 02 Dec 2024 11:50:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Dec 2024 11:50:50 +0000   Mon, 02 Dec 2024 11:50:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Dec 2024 11:50:50 +0000   Mon, 02 Dec 2024 11:50:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Dec 2024 11:50:50 +0000   Mon, 02 Dec 2024 11:50:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-093284-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 f48295ac251443ce8f10f1461a9e7e3f
	  System UUID:                70854005-7887-4975-8ca3-9135917673df
	  Boot ID:                    2a9b6797-354b-47aa-b86d-31dcdc265ca8
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7fk6g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kindnet-7mpq6              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m41s
	  kube-system                 kube-proxy-nbwvv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m53s                  kube-proxy       
	  Normal   Starting                 6m39s                  kube-proxy       
	  Normal   Starting                 20s                    kube-proxy       
	  Normal   NodeHasSufficientPID     6m41s (x2 over 6m41s)  kubelet          Node ha-093284-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m41s (x2 over 6m41s)  kubelet          Node ha-093284-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  6m41s (x2 over 6m41s)  kubelet          Node ha-093284-m04 status is now: NodeHasSufficientMemory
	  Normal   CIDRAssignmentFailed     6m41s                  cidrAllocator    Node ha-093284-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           6m39s                  node-controller  Node ha-093284-m04 event: Registered Node ha-093284-m04 in Controller
	  Normal   RegisteredNode           6m39s                  node-controller  Node ha-093284-m04 event: Registered Node ha-093284-m04 in Controller
	  Normal   RegisteredNode           6m38s                  node-controller  Node ha-093284-m04 event: Registered Node ha-093284-m04 in Controller
	  Normal   NodeReady                6m26s                  kubelet          Node ha-093284-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m20s                  node-controller  Node ha-093284-m04 event: Registered Node ha-093284-m04 in Controller
	  Normal   RegisteredNode           4m5s                   node-controller  Node ha-093284-m04 event: Registered Node ha-093284-m04 in Controller
	  Normal   RegisteredNode           4m                     node-controller  Node ha-093284-m04 event: Registered Node ha-093284-m04 in Controller
	  Normal   NodeNotReady             3m25s                  node-controller  Node ha-093284-m04 status is now: NodeNotReady
	  Warning  CgroupV1                 3m19s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 3m19s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-093284-m04 event: Registered Node ha-093284-m04 in Controller
	  Normal   NodeHasSufficientPID     3m13s (x7 over 3m19s)  kubelet          Node ha-093284-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m6s (x8 over 3m19s)   kubelet          Node ha-093284-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m6s (x8 over 3m19s)   kubelet          Node ha-093284-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           86s                    node-controller  Node ha-093284-m04 event: Registered Node ha-093284-m04 in Controller
	  Normal   NodeNotReady             46s                    node-controller  Node ha-093284-m04 status is now: NodeNotReady
	  Normal   Starting                 41s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 41s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           35s                    node-controller  Node ha-093284-m04 event: Registered Node ha-093284-m04 in Controller
	  Normal   NodeHasSufficientPID     34s (x7 over 41s)      kubelet          Node ha-093284-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  28s (x8 over 41s)      kubelet          Node ha-093284-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    28s (x8 over 41s)      kubelet          Node ha-093284-m04 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 02 42 38 d0 1e 4e 02 42 c0 a8 31 02 08 00
	[  +2.015757] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-236a02a97ab3
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-236a02a97ab3
	[  +0.000002] ll header: 00000000: 02 42 38 d0 1e 4e 02 42 c0 a8 31 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 38 d0 1e 4e 02 42 c0 a8 31 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-236a02a97ab3
	[  +0.000001] ll header: 00000000: 02 42 38 d0 1e 4e 02 42 c0 a8 31 02 08 00
	[  +1.962237] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethee9eeefd
	[  +1.999928] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethee9eeefd
	[  +0.000035] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethee9eeefd
	[  +0.225322] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-236a02a97ab3
	[  +0.000007] ll header: 00000000: 02 42 38 d0 1e 4e 02 42 c0 a8 31 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-236a02a97ab3
	[  +0.000005] ll header: 00000000: 02 42 38 d0 1e 4e 02 42 c0 a8 31 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-236a02a97ab3
	[  +0.000001] ll header: 00000000: 02 42 38 d0 1e 4e 02 42 c0 a8 31 02 08 00
	[  +0.774562] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethee9eeefd
	[  +0.500797] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethee9eeefd
	[  +0.499212] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethee9eeefd
	[  +0.999954] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethee9eeefd
	[  +0.002360] IPv4: martian destination 127.0.0.11 from 10.244.0.2, dev vethee9eeefd
	[  +5.414255] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-236a02a97ab3
	[  +0.000027] ll header: 00000000: 02 42 38 d0 1e 4e 02 42 c0 a8 31 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-236a02a97ab3
	[  +0.000003] ll header: 00000000: 02 42 38 d0 1e 4e 02 42 c0 a8 31 02 08 00
	
	
	==> etcd [c82502e5a5109f5c048c0623bdeb707c32805c9bb8680f3945d69ee5f265745c] <==
	{"level":"info","ts":"2024-12-02T11:49:46.041769Z","caller":"traceutil/trace.go:171","msg":"trace[1064417978] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:2211; }","duration":"3.223271757s","start":"2024-12-02T11:49:42.818491Z","end":"2024-12-02T11:49:46.041763Z","steps":["trace[1064417978] 'agreement among raft nodes before linearized reading'  (duration: 3.223236358s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:49:46.041774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.823647Z","time spent":"3.218120805s","remote":"127.0.0.1:50630","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":4,"response size":1407,"request content":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-02T11:49:46.041788Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.818465Z","time spent":"3.223317312s","remote":"127.0.0.1:50740","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":29,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-02T11:49:46.041861Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:43.048234Z","time spent":"2.993618092s","remote":"127.0.0.1:50630","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":4,"response size":1407,"request content":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" limit:500 "}
	{"level":"warn","ts":"2024-12-02T11:49:46.041950Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.225553488s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-12-02T11:49:46.041959Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.818802Z","time spent":"3.223147242s","remote":"127.0.0.1:50890","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":55,"response size":39217,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:10000 "}
	{"level":"info","ts":"2024-12-02T11:49:46.041978Z","caller":"traceutil/trace.go:171","msg":"trace[1781126133] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:2211; }","duration":"3.225583254s","start":"2024-12-02T11:49:42.816388Z","end":"2024-12-02T11:49:46.041972Z","steps":["trace[1781126133] 'agreement among raft nodes before linearized reading'  (duration: 3.225539089s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:49:46.041995Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.224690403s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 ","response":"range_response_count:29 size:155015"}
	{"level":"warn","ts":"2024-12-02T11:49:46.041997Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.816353Z","time spent":"3.225638199s","remote":"127.0.0.1:50762","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":29,"request content":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:10000 "}
	{"level":"info","ts":"2024-12-02T11:49:46.042041Z","caller":"traceutil/trace.go:171","msg":"trace[1316828141] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:29; response_revision:2211; }","duration":"3.224737845s","start":"2024-12-02T11:49:42.817291Z","end":"2024-12-02T11:49:46.042029Z","steps":["trace[1316828141] 'agreement among raft nodes before linearized reading'  (duration: 3.224435962s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:49:46.042069Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.817271Z","time spent":"3.224789101s","remote":"127.0.0.1:50696","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":29,"response size":155039,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-02T11:49:46.041564Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.652465Z","time spent":"3.389087699s","remote":"127.0.0.1:50882","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":67,"response size":60579,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 "}
	{"level":"warn","ts":"2024-12-02T11:49:46.042219Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.822524Z","time spent":"3.219682442s","remote":"127.0.0.1:51038","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":2,"response size":5925,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-02T11:49:46.040678Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.949973Z","time spent":"3.090699471s","remote":"127.0.0.1:50932","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":29,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" limit:500 "}
	{"level":"warn","ts":"2024-12-02T11:49:46.042447Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.226054342s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 ","response":"range_response_count:3 size:18509"}
	{"level":"info","ts":"2024-12-02T11:49:46.042477Z","caller":"traceutil/trace.go:171","msg":"trace[692973163] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:3; response_revision:2211; }","duration":"3.226087126s","start":"2024-12-02T11:49:42.816382Z","end":"2024-12-02T11:49:46.042469Z","steps":["trace[692973163] 'agreement among raft nodes before linearized reading'  (duration: 3.225980593s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:49:46.042502Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.816346Z","time spent":"3.226148584s","remote":"127.0.0.1:50694","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":3,"response size":18533,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-02T11:49:46.042797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.221514205s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-02T11:49:46.042846Z","caller":"traceutil/trace.go:171","msg":"trace[937544475] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:2211; }","duration":"3.221583913s","start":"2024-12-02T11:49:42.821244Z","end":"2024-12-02T11:49:46.042828Z","steps":["trace[937544475] 'agreement among raft nodes before linearized reading'  (duration: 3.221480539s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:49:46.042874Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.821218Z","time spent":"3.221647193s","remote":"127.0.0.1:50684","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":29,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" limit:10000 "}
	{"level":"warn","ts":"2024-12-02T11:49:46.043150Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"3.263353653s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 ","response":"range_response_count:3 size:18509"}
	{"level":"info","ts":"2024-12-02T11:49:46.043248Z","caller":"traceutil/trace.go:171","msg":"trace[1122125909] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:3; response_revision:2211; }","duration":"3.263455057s","start":"2024-12-02T11:49:42.779774Z","end":"2024-12-02T11:49:46.043229Z","steps":["trace[1122125909] 'agreement among raft nodes before linearized reading'  (duration: 3.263288645s)"],"step_count":1}
	{"level":"warn","ts":"2024-12-02T11:49:46.043309Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:49:42.779738Z","time spent":"3.263559915s","remote":"127.0.0.1:50694","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":3,"response size":18533,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "}
	{"level":"warn","ts":"2024-12-02T11:49:46.910066Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5d0b59096816432e","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-12-02T11:49:46.910116Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5d0b59096816432e","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 11:51:18 up 33 min,  0 users,  load average: 1.31, 1.27, 0.88
	Linux ha-093284 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [bbc84830793d8262e4a6609da38d2756d78670922cd6ef5364614afa72142c7c] <==
	I1202 11:50:35.504009       1 main.go:324] Node ha-093284-m04 has CIDR [10.244.3.0/24] 
	I1202 11:50:45.501465       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 11:50:45.501539       1 main.go:324] Node ha-093284-m02 has CIDR [10.244.1.0/24] 
	I1202 11:50:45.501711       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 11:50:45.501721       1 main.go:324] Node ha-093284-m04 has CIDR [10.244.3.0/24] 
	I1202 11:50:45.501793       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:50:45.501802       1 main.go:301] handling current node
	I1202 11:50:55.501437       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 11:50:55.501471       1 main.go:324] Node ha-093284-m04 has CIDR [10.244.3.0/24] 
	I1202 11:50:55.501642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:50:55.501652       1 main.go:301] handling current node
	I1202 11:50:55.501663       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 11:50:55.501667       1 main.go:324] Node ha-093284-m02 has CIDR [10.244.1.0/24] 
	I1202 11:51:05.506935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:51:05.506970       1 main.go:301] handling current node
	I1202 11:51:05.506984       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 11:51:05.506989       1 main.go:324] Node ha-093284-m02 has CIDR [10.244.1.0/24] 
	I1202 11:51:05.507164       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 11:51:05.507176       1 main.go:324] Node ha-093284-m04 has CIDR [10.244.3.0/24] 
	I1202 11:51:15.501095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1202 11:51:15.501150       1 main.go:301] handling current node
	I1202 11:51:15.501163       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1202 11:51:15.501169       1 main.go:324] Node ha-093284-m02 has CIDR [10.244.1.0/24] 
	I1202 11:51:15.501366       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1202 11:51:15.501378       1 main.go:324] Node ha-093284-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [1a649cec3a74dd383932bd0f752d5d3e85cd5fc65a9427a7dd0dff833b390053] <==
	I1202 11:50:30.801341       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 11:50:30.801434       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 11:50:30.817662       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1202 11:50:30.817762       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1202 11:50:30.903865       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1202 11:50:30.908728       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1202 11:50:30.908782       1 policy_source.go:224] refreshing policies
	I1202 11:50:30.911057       1 shared_informer.go:320] Caches are synced for configmaps
	I1202 11:50:30.917826       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1202 11:50:30.918280       1 aggregator.go:171] initial CRD sync complete...
	I1202 11:50:30.918308       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 11:50:30.918316       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 11:50:30.918324       1 cache.go:39] Caches are synced for autoregister controller
	I1202 11:50:31.000521       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 11:50:31.000813       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 11:50:31.000893       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 11:50:31.001049       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 11:50:31.001077       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 11:50:31.000837       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 11:50:31.001809       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1202 11:50:31.007878       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1202 11:50:31.749413       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1202 11:50:32.019712       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1202 11:50:32.021193       1 controller.go:615] quota admission added evaluator for: endpoints
	I1202 11:50:32.027432       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [edf20a69b72e4da9bead4301312e79b2bcf10d69c68d8072bb071946d90f4a18] <==
	E1202 11:49:41.824762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RoleBinding: failed to list *v1.RoleBinding: etcdserver: request timed out" logger="UnhandledError"
	E1202 11:49:41.824763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ResourceQuota: failed to list *v1.ResourceQuota: etcdserver: request timed out" logger="UnhandledError"
	I1202 11:49:46.070975       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 11:49:46.102646       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1202 11:49:46.110573       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1202 11:49:46.145583       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1202 11:49:46.153616       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 11:49:46.153639       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 11:49:46.153732       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1202 11:49:46.153758       1 policy_source.go:224] refreshing policies
	I1202 11:49:46.153855       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 11:49:46.154134       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 11:49:46.154148       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1202 11:49:46.154548       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1202 11:49:46.154583       1 aggregator.go:171] initial CRD sync complete...
	I1202 11:49:46.154597       1 autoregister_controller.go:144] Starting autoregister controller
	I1202 11:49:46.154605       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1202 11:49:46.154612       1 cache.go:39] Caches are synced for autoregister controller
	I1202 11:49:46.159284       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1202 11:49:46.171440       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 11:49:46.212947       1 controller.go:615] quota admission added evaluator for: endpoints
	I1202 11:49:46.219079       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1202 11:49:46.221145       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1202 11:49:47.055124       1 shared_informer.go:320] Caches are synced for configmaps
	F1202 11:50:28.754191       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [8eea79962cb8ae9f80f3abc82ad469573b64fe15eb697b9c6db0dca142401171] <==
	I1202 11:50:10.249516       1 serving.go:386] Generated self-signed cert in-memory
	I1202 11:50:10.587342       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1202 11:50:10.587370       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 11:50:10.588773       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1202 11:50:10.588894       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1202 11:50:10.588944       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1202 11:50:10.588985       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 11:50:20.598215       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [948983526a98b84dd4f01bf5a95deaaae98d812687416365d0d8779294effd0e] <==
	I1202 11:50:44.106549       1 shared_informer.go:320] Caches are synced for resource quota
	I1202 11:50:44.139728       1 shared_informer.go:320] Caches are synced for resource quota
	I1202 11:50:44.550724       1 shared_informer.go:320] Caches are synced for garbage collector
	I1202 11:50:44.613513       1 shared_informer.go:320] Caches are synced for garbage collector
	I1202 11:50:44.613546       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 11:50:50.449574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-093284-m04"
	I1202 11:50:50.450007       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-093284-m04"
	I1202 11:50:50.459898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-093284-m04"
	I1202 11:50:52.105954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-093284-m04"
	I1202 11:50:56.689855       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.925µs"
	I1202 11:50:57.782476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.788598ms"
	I1202 11:50:57.782598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="72.885µs"
	I1202 11:51:12.117035       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-093284"
	I1202 11:51:12.117039       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-093284-m04"
	I1202 11:51:12.128835       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-093284"
	I1202 11:51:12.239497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.305303ms"
	I1202 11:51:12.239593       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.12µs"
	I1202 11:51:12.241940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.967195ms"
	I1202 11:51:12.242076       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="86.871µs"
	I1202 11:51:12.250444       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-kpcfv EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-kpcfv\": the object has been modified; please apply your changes to the latest version and try again"
	I1202 11:51:12.250617       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9c6d9423-97dd-445f-8b19-836d619e801c", APIVersion:"v1", ResourceVersion:"242", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-kpcfv EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-kpcfv": the object has been modified; please apply your changes to the latest version and try again
	I1202 11:51:14.010786       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-093284"
	I1202 11:51:16.092012       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.969614ms"
	I1202 11:51:16.092557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.681µs"
	I1202 11:51:17.265243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-093284"
	
	
	==> kube-proxy [5283f288249845f0899e7e24af0af026a0cc709819ddc24d42ec03e4a921b465] <==
	I1202 11:50:14.018226       1 server_linux.go:66] "Using iptables proxy"
	I1202 11:50:14.132765       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1202 11:50:14.132831       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 11:50:14.152977       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1202 11:50:14.153034       1 server_linux.go:169] "Using iptables Proxier"
	I1202 11:50:14.155057       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 11:50:14.155488       1 server.go:483] "Version info" version="v1.31.2"
	I1202 11:50:14.155524       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 11:50:14.156777       1 config.go:105] "Starting endpoint slice config controller"
	I1202 11:50:14.156819       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1202 11:50:14.156849       1 config.go:199] "Starting service config controller"
	I1202 11:50:14.156865       1 config.go:328] "Starting node config controller"
	I1202 11:50:14.156870       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1202 11:50:14.156874       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1202 11:50:14.256938       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1202 11:50:14.256979       1 shared_informer.go:320] Caches are synced for node config
	I1202 11:50:14.257035       1 shared_informer.go:320] Caches are synced for service config
	W1202 11:51:16.781605       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W1202 11:51:16.781605       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W1202 11:51:16.781663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-093284&resourceVersion=2529": http2: client connection lost
	E1202 11:51:16.781724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-093284&resourceVersion=2529\": http2: client connection lost" logger="UnhandledError"
	
	
	==> kube-scheduler [d7814a4d94ddc3097887995df3862e81f6327d5e9ee322c77406d2c50fd7c0da] <==
	W1202 11:49:36.868744       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1202 11:49:36.868784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:49:37.251874       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1202 11:49:37.251916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:49:37.409831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 11:49:37.409914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:49:37.467894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 11:49:37.467935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:49:37.570875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1202 11:49:37.570923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:49:37.647915       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1202 11:49:37.647967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:49:37.904690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1202 11:49:37.904731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1202 11:49:38.619527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1202 11:49:38.619574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:49:44.601236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1202 11:49:44.601281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1202 11:49:45.018093       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1202 11:49:45.018161       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1202 11:49:45.396153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1202 11:49:45.396194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1202 11:49:45.673678       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1202 11:49:45.673718       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1202 11:50:00.229030       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 02 11:50:58 ha-093284 kubelet[843]: E1202 11:50:58.661357     843 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-093284?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 02 11:50:59 ha-093284 kubelet[843]: E1202 11:50:59.945519     843 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140259945339238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157330,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:50:59 ha-093284 kubelet[843]: E1202 11:50:59.945560     843 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140259945339238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157330,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:08 ha-093284 kubelet[843]: E1202 11:51:08.661881     843 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-093284?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Dec 02 11:51:09 ha-093284 kubelet[843]: E1202 11:51:09.946795     843 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140269946578135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157330,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:09 ha-093284 kubelet[843]: E1202 11:51:09.946835     843 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733140269946578135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:157330,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 02 11:51:16 ha-093284 kubelet[843]: E1202 11:51:16.025494     843 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-093284?timeout=10s\": http2: client connection lost"
	Dec 02 11:51:16 ha-093284 kubelet[843]: W1202 11:51:16.025511     843 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2211": http2: client connection lost
	Dec 02 11:51:16 ha-093284 kubelet[843]: W1202 11:51:16.025668     843 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2482": http2: client connection lost
	Dec 02 11:51:16 ha-093284 kubelet[843]: I1202 11:51:16.025513     843 status_manager.go:851] "Failed to get status for pod" podUID="0a077e8d3c7ff74fdbd182aa90b65daf" pod="kube-system/kube-vip-ha-093284" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-093284\": http2: client connection lost"
	Dec 02 11:51:16 ha-093284 kubelet[843]: E1202 11:51:16.025718     843 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2211\": http2: client connection lost" logger="UnhandledError"
	Dec 02 11:51:16 ha-093284 kubelet[843]: W1202 11:51:16.025538     843 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2211": http2: client connection lost
	Dec 02 11:51:16 ha-093284 kubelet[843]: E1202 11:51:16.025757     843 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2211\": http2: client connection lost" logger="UnhandledError"
	Dec 02 11:51:16 ha-093284 kubelet[843]: W1202 11:51:16.025566     843 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-093284&resourceVersion=2580": http2: client connection lost
	Dec 02 11:51:16 ha-093284 kubelet[843]: E1202 11:51:16.025801     843 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-093284&resourceVersion=2580\": http2: client connection lost" logger="UnhandledError"
	Dec 02 11:51:16 ha-093284 kubelet[843]: W1202 11:51:16.025552     843 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-093284&resourceVersion=2478": http2: client connection lost
	Dec 02 11:51:16 ha-093284 kubelet[843]: E1202 11:51:16.025495     843 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-093284.180d59cc7b7384de\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-093284.180d59cc7b7384de  kube-system   2409 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-093284,UID:0ebdf46e6ad6a26c2c66e98f73172a38,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.2\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-093284,},FirstTimestamp:2024-12-02 11:49:26 +0000 UTC,LastTimestamp:2024-12-02 11:50:29.086865136 +0000 UTC m=+69.246243378,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-093284,}"
	Dec 02 11:51:16 ha-093284 kubelet[843]: E1202 11:51:16.025834     843 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-093284&resourceVersion=2478\": http2: client connection lost" logger="UnhandledError"
	Dec 02 11:51:16 ha-093284 kubelet[843]: W1202 11:51:16.025595     843 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2211": http2: client connection lost
	Dec 02 11:51:16 ha-093284 kubelet[843]: E1202 11:51:16.025729     843 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2482\": http2: client connection lost" logger="UnhandledError"
	Dec 02 11:51:16 ha-093284 kubelet[843]: E1202 11:51:16.025867     843 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2211\": http2: client connection lost" logger="UnhandledError"
	Dec 02 11:51:16 ha-093284 kubelet[843]: W1202 11:51:16.025584     843 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2474": http2: client connection lost
	Dec 02 11:51:16 ha-093284 kubelet[843]: E1202 11:51:16.025902     843 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2474\": http2: client connection lost" logger="UnhandledError"
	Dec 02 11:51:16 ha-093284 kubelet[843]: W1202 11:51:16.025621     843 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2211": http2: client connection lost
	Dec 02 11:51:16 ha-093284 kubelet[843]: E1202 11:51:16.025954     843 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2211\": http2: client connection lost" logger="UnhandledError"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-093284 -n ha-093284
helpers_test.go:261: (dbg) Run:  kubectl --context ha-093284 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (125.99s)

                                                
                                    

Test pass (301/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.34
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 5.35
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.21
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.07
21 TestBinaryMirror 0.75
22 TestOffline 60.67
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 156.27
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 7.45
35 TestAddons/parallel/Registry 15.48
37 TestAddons/parallel/InspektorGadget 11.64
40 TestAddons/parallel/CSI 54.85
41 TestAddons/parallel/Headlamp 16.4
42 TestAddons/parallel/CloudSpanner 5.47
43 TestAddons/parallel/LocalPath 8.09
44 TestAddons/parallel/NvidiaDevicePlugin 5.46
45 TestAddons/parallel/Yakd 11.63
46 TestAddons/parallel/AmdGpuDevicePlugin 5.46
47 TestAddons/StoppedEnableDisable 12.06
48 TestCertOptions 30.82
49 TestCertExpiration 221.96
51 TestForceSystemdFlag 24.72
52 TestForceSystemdEnv 41.9
54 TestKVMDriverInstallOrUpdate 3.52
58 TestErrorSpam/setup 20.41
59 TestErrorSpam/start 0.56
60 TestErrorSpam/status 0.87
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.56
63 TestErrorSpam/stop 1.35
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 42.85
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 28.04
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.12
75 TestFunctional/serial/CacheCmd/cache/add_local 1.32
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 39.12
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.36
86 TestFunctional/serial/LogsFileCmd 1.36
87 TestFunctional/serial/InvalidService 4.3
89 TestFunctional/parallel/ConfigCmd 0.37
90 TestFunctional/parallel/DashboardCmd 10.29
91 TestFunctional/parallel/DryRun 0.39
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.9
97 TestFunctional/parallel/ServiceCmdConnect 20.56
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 32.31
101 TestFunctional/parallel/SSHCmd 0.52
102 TestFunctional/parallel/CpCmd 1.57
103 TestFunctional/parallel/MySQL 19.74
104 TestFunctional/parallel/FileSync 0.33
105 TestFunctional/parallel/CertSync 1.77
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
113 TestFunctional/parallel/License 0.22
114 TestFunctional/parallel/ServiceCmd/DeployApp 19.18
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
116 TestFunctional/parallel/ProfileCmd/profile_list 0.36
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
118 TestFunctional/parallel/MountCmd/any-port 15.08
119 TestFunctional/parallel/MountCmd/specific-port 1.93
120 TestFunctional/parallel/MountCmd/VerifyCleanup 0.91
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.39
123 TestFunctional/parallel/ServiceCmd/List 0.51
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.23
127 TestFunctional/parallel/Version/short 0.06
128 TestFunctional/parallel/Version/components 0.69
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.89
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
134 TestFunctional/parallel/ImageCommands/ImageBuild 2.5
135 TestFunctional/parallel/ImageCommands/Setup 0.93
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.47
138 TestFunctional/parallel/ServiceCmd/Format 0.6
139 TestFunctional/parallel/ServiceCmd/URL 0.72
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.97
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.54
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 99.71
162 TestMultiControlPlane/serial/DeployApp 4.26
163 TestMultiControlPlane/serial/PingHostFromPods 1.03
164 TestMultiControlPlane/serial/AddWorkerNode 32.21
165 TestMultiControlPlane/serial/NodeLabels 0.06
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
167 TestMultiControlPlane/serial/CopyFile 15.81
168 TestMultiControlPlane/serial/StopSecondaryNode 12.49
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
170 TestMultiControlPlane/serial/RestartSecondaryNode 25.13
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.97
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 150.96
173 TestMultiControlPlane/serial/DeleteSecondaryNode 11.32
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
175 TestMultiControlPlane/serial/StopCluster 35.42
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
178 TestMultiControlPlane/serial/AddSecondaryNode 37.74
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
183 TestJSONOutput/start/Command 42.47
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.67
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.59
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.75
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
208 TestKicCustomNetwork/create_custom_network 29.17
209 TestKicCustomNetwork/use_default_bridge_network 23.02
210 TestKicExistingNetwork 26.6
211 TestKicCustomSubnet 26.52
212 TestKicStaticIP 26.17
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 53.44
217 TestMountStart/serial/StartWithMountFirst 8.14
218 TestMountStart/serial/VerifyMountFirst 0.24
219 TestMountStart/serial/StartWithMountSecond 5.26
220 TestMountStart/serial/VerifyMountSecond 0.24
221 TestMountStart/serial/DeleteFirst 1.59
222 TestMountStart/serial/VerifyMountPostDelete 0.24
223 TestMountStart/serial/Stop 1.18
224 TestMountStart/serial/RestartStopped 7.29
225 TestMountStart/serial/VerifyMountPostStop 0.24
228 TestMultiNode/serial/FreshStart2Nodes 72.29
229 TestMultiNode/serial/DeployApp2Nodes 3.03
230 TestMultiNode/serial/PingHostFrom2Pods 0.74
231 TestMultiNode/serial/AddNode 27.81
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.61
234 TestMultiNode/serial/CopyFile 9.06
235 TestMultiNode/serial/StopNode 2.09
236 TestMultiNode/serial/StartAfterStop 8.93
237 TestMultiNode/serial/RestartKeepsNodes 78.26
238 TestMultiNode/serial/DeleteNode 5
239 TestMultiNode/serial/StopMultiNode 23.73
240 TestMultiNode/serial/RestartMultiNode 52.07
241 TestMultiNode/serial/ValidateNameConflict 25.76
246 TestPreload 103.97
248 TestScheduledStopUnix 96.27
251 TestInsufficientStorage 9.99
252 TestRunningBinaryUpgrade 57.39
254 TestKubernetesUpgrade 346.87
255 TestMissingContainerUpgrade 130.55
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 38.51
259 TestNoKubernetes/serial/StartWithStopK8s 18.38
260 TestNoKubernetes/serial/Start 5.98
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
262 TestNoKubernetes/serial/ProfileList 6.24
263 TestNoKubernetes/serial/Stop 2.77
264 TestNoKubernetes/serial/StartNoArgs 6.88
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
273 TestStoppedBinaryUpgrade/Setup 0.77
274 TestStoppedBinaryUpgrade/Upgrade 66.18
275 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
277 TestPause/serial/Start 45.75
285 TestNetworkPlugins/group/false 3.33
290 TestStartStop/group/old-k8s-version/serial/FirstStart 122.89
291 TestPause/serial/SecondStartNoReconfiguration 39.73
292 TestPause/serial/Pause 0.7
293 TestPause/serial/VerifyStatus 0.3
294 TestPause/serial/Unpause 0.61
295 TestPause/serial/PauseAgain 0.71
296 TestPause/serial/DeletePaused 2.63
297 TestPause/serial/VerifyDeletedResources 14.95
299 TestStartStop/group/no-preload/serial/FirstStart 58.17
301 TestStartStop/group/embed-certs/serial/FirstStart 42.97
302 TestStartStop/group/old-k8s-version/serial/DeployApp 8.38
303 TestStartStop/group/embed-certs/serial/DeployApp 8.27
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
305 TestStartStop/group/no-preload/serial/DeployApp 9.27
306 TestStartStop/group/old-k8s-version/serial/Stop 11.97
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
308 TestStartStop/group/embed-certs/serial/Stop 11.85
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.83
310 TestStartStop/group/no-preload/serial/Stop 13.67
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
312 TestStartStop/group/old-k8s-version/serial/SecondStart 124.16
313 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
314 TestStartStop/group/embed-certs/serial/SecondStart 267.57
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
316 TestStartStop/group/no-preload/serial/SecondStart 263.73
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.17
319 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.24
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
323 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
324 TestStartStop/group/old-k8s-version/serial/Pause 2.7
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.9
327 TestStartStop/group/newest-cni/serial/FirstStart 28.51
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 297.81
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
332 TestStartStop/group/newest-cni/serial/Stop 2.1
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
334 TestStartStop/group/newest-cni/serial/SecondStart 13.55
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
338 TestStartStop/group/newest-cni/serial/Pause 2.88
339 TestNetworkPlugins/group/auto/Start 42.72
340 TestNetworkPlugins/group/auto/KubeletFlags 0.26
341 TestNetworkPlugins/group/auto/NetCatPod 9.19
342 TestNetworkPlugins/group/auto/DNS 0.12
343 TestNetworkPlugins/group/auto/Localhost 0.11
344 TestNetworkPlugins/group/auto/HairPin 0.1
345 TestNetworkPlugins/group/kindnet/Start 45.98
346 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
347 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
348 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
349 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
350 TestStartStop/group/embed-certs/serial/Pause 2.94
351 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
352 TestNetworkPlugins/group/calico/Start 53.1
353 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
354 TestStartStop/group/no-preload/serial/Pause 3.11
355 TestNetworkPlugins/group/custom-flannel/Start 43.58
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
359 TestNetworkPlugins/group/kindnet/DNS 0.14
360 TestNetworkPlugins/group/kindnet/Localhost 0.12
361 TestNetworkPlugins/group/kindnet/HairPin 0.13
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.2
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.28
366 TestNetworkPlugins/group/calico/NetCatPod 10.19
367 TestNetworkPlugins/group/enable-default-cni/Start 38.8
368 TestNetworkPlugins/group/custom-flannel/DNS 0.15
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
371 TestNetworkPlugins/group/calico/DNS 0.15
372 TestNetworkPlugins/group/calico/Localhost 0.13
373 TestNetworkPlugins/group/calico/HairPin 0.12
374 TestNetworkPlugins/group/flannel/Start 52.02
375 TestNetworkPlugins/group/bridge/Start 61.12
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
383 TestNetworkPlugins/group/flannel/NetCatPod 10.17
384 TestNetworkPlugins/group/flannel/DNS 0.12
385 TestNetworkPlugins/group/flannel/Localhost 0.1
386 TestNetworkPlugins/group/flannel/HairPin 0.11
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
388 TestNetworkPlugins/group/bridge/NetCatPod 11.18
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
390 TestNetworkPlugins/group/bridge/DNS 0.14
391 TestNetworkPlugins/group/bridge/Localhost 0.11
392 TestNetworkPlugins/group/bridge/HairPin 0.12
393 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
394 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
395 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.77
x
+
TestDownloadOnly/v1.20.0/json-events (6.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-348557 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-348557 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.337960176s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.34s)
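The invocation above can be reproduced outside the test harness. The following is a minimal sketch, not the test's own helper code: the binary path, profile name, and flags are copied from the logged command, everything else is illustrative. It runs the same download-only start and prints the JSON events emitted on stdout.

	package main
	
	import (
		"bufio"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Flags copied from the logged invocation; adjust the binary path for your checkout.
		cmd := exec.Command("out/minikube-linux-amd64",
			"start", "-o=json", "--download-only", "-p", "download-only-348557",
			"--force", "--alsologtostderr",
			"--kubernetes-version=v1.20.0",
			"--container-runtime=crio", "--driver=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		// With -o=json, progress is reported as one JSON event per line on stdout.
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := cmd.Wait(); err != nil {
			fmt.Println("minikube exited with error:", err)
		}
	}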

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1202 11:30:30.101758   13299 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1202 11:30:30.101857   13299 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
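preload-exists only asserts that the cached tarball reported above is present on disk. A rough stand-alone equivalent would be the sketch below; the path is copied from the log, the program itself is not minikube code.

	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		// Cache path reported by preload.go in the log above.
		preload := "/home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
		if info, err := os.Stat(preload); err == nil {
			fmt.Printf("preload exists (%d bytes)\n", info.Size())
		} else {
			fmt.Println("preload missing:", err)
		}
	}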

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-348557
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-348557: exit status 85 (66.84422ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-348557 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |          |
	|         | -p download-only-348557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:30:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:30:23.804675   13311 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:30:23.804763   13311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:23.804767   13311 out.go:358] Setting ErrFile to fd 2...
	I1202 11:30:23.804772   13311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:23.804934   13311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	W1202 11:30:23.805088   13311 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20033-6540/.minikube/config/config.json: open /home/jenkins/minikube-integration/20033-6540/.minikube/config/config.json: no such file or directory
	I1202 11:30:23.805643   13311 out.go:352] Setting JSON to true
	I1202 11:30:23.806487   13311 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":775,"bootTime":1733138249,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:30:23.806584   13311 start.go:139] virtualization: kvm guest
	I1202 11:30:23.809182   13311 out.go:97] [download-only-348557] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1202 11:30:23.809308   13311 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball: no such file or directory
	I1202 11:30:23.809353   13311 notify.go:220] Checking for updates...
	I1202 11:30:23.810820   13311 out.go:169] MINIKUBE_LOCATION=20033
	I1202 11:30:23.812295   13311 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:30:23.813626   13311 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:30:23.814884   13311 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	I1202 11:30:23.816150   13311 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 11:30:23.818588   13311 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 11:30:23.818778   13311 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:30:23.840612   13311 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1202 11:30:23.840689   13311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:30:24.203676   13311 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2024-12-02 11:30:24.194772975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:30:24.203779   13311 docker.go:318] overlay module found
	I1202 11:30:24.205605   13311 out.go:97] Using the docker driver based on user configuration
	I1202 11:30:24.205630   13311 start.go:297] selected driver: docker
	I1202 11:30:24.205637   13311 start.go:901] validating driver "docker" against <nil>
	I1202 11:30:24.205750   13311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:30:24.254570   13311 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2024-12-02 11:30:24.246416975 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:30:24.254778   13311 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:30:24.255546   13311 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1202 11:30:24.255787   13311 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 11:30:24.257759   13311 out.go:169] Using Docker driver with root privileges
	I1202 11:30:24.259112   13311 cni.go:84] Creating CNI manager for ""
	I1202 11:30:24.259172   13311 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 11:30:24.259183   13311 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 11:30:24.259259   13311 start.go:340] cluster config:
	{Name:download-only-348557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-348557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:30:24.260582   13311 out.go:97] Starting "download-only-348557" primary control-plane node in "download-only-348557" cluster
	I1202 11:30:24.260598   13311 cache.go:121] Beginning downloading kic base image for docker with crio
	I1202 11:30:24.261618   13311 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1202 11:30:24.261639   13311 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 11:30:24.261678   13311 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1202 11:30:24.278133   13311 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1202 11:30:24.278271   13311 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1202 11:30:24.278355   13311 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1202 11:30:24.302092   13311 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1202 11:30:24.302121   13311 cache.go:56] Caching tarball of preloaded images
	I1202 11:30:24.302283   13311 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1202 11:30:24.304438   13311 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1202 11:30:24.304465   13311 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1202 11:30:24.341255   13311 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1202 11:30:27.514539   13311 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1202 11:30:28.603118   13311 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1202 11:30:28.603226   13311 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-348557 host does not exist
	  To start a cluster, run: "minikube start -p download-only-348557"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
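The download URL logged above carries a checksum parameter (md5:f93b07cde9c3289306cbaeb7a1803c19), and the later preload.go lines show that checksum being saved and verified against the downloaded tarball. A small illustrative sketch of the same verification, using the path and checksum from the log (this is not minikube's own verifier):

	package main
	
	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)
	
	func main() {
		// Checksum and path taken from the download.go line in the log above.
		const want = "f93b07cde9c3289306cbaeb7a1803c19"
		path := "/home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	
		f, err := os.Open(path)
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		fmt.Println("checksum matches:", hex.EncodeToString(h.Sum(nil)) == want)
	}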

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-348557
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (5.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-386345 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-386345 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.35074389s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.35s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1202 11:30:35.864590   13299 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1202 11:30:35.864659   13299 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-386345
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-386345: exit status 85 (63.476861ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-348557 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | -p download-only-348557        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| delete  | -p download-only-348557        | download-only-348557 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
	| start   | -o=json --download-only        | download-only-386345 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC |                     |
	|         | -p download-only-386345        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/02 11:30:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 11:30:30.556152   13676 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:30:30.556310   13676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:30.556322   13676 out.go:358] Setting ErrFile to fd 2...
	I1202 11:30:30.556327   13676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:30:30.556566   13676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 11:30:30.557177   13676 out.go:352] Setting JSON to true
	I1202 11:30:30.558058   13676 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":782,"bootTime":1733138249,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:30:30.558156   13676 start.go:139] virtualization: kvm guest
	I1202 11:30:30.560310   13676 out.go:97] [download-only-386345] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:30:30.560451   13676 notify.go:220] Checking for updates...
	I1202 11:30:30.561962   13676 out.go:169] MINIKUBE_LOCATION=20033
	I1202 11:30:30.563255   13676 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:30:30.564646   13676 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:30:30.565872   13676 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	I1202 11:30:30.567087   13676 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 11:30:30.569456   13676 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 11:30:30.569682   13676 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:30:30.591006   13676 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1202 11:30:30.591117   13676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:30:30.636923   13676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-12-02 11:30:30.627833461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:30:30.637026   13676 docker.go:318] overlay module found
	I1202 11:30:30.638868   13676 out.go:97] Using the docker driver based on user configuration
	I1202 11:30:30.638891   13676 start.go:297] selected driver: docker
	I1202 11:30:30.638896   13676 start.go:901] validating driver "docker" against <nil>
	I1202 11:30:30.639013   13676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:30:30.683077   13676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-12-02 11:30:30.674797427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:30:30.683246   13676 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1202 11:30:30.683730   13676 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1202 11:30:30.683864   13676 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 11:30:30.685888   13676 out.go:169] Using Docker driver with root privileges
	I1202 11:30:30.687181   13676 cni.go:84] Creating CNI manager for ""
	I1202 11:30:30.687244   13676 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1202 11:30:30.687252   13676 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1202 11:30:30.687317   13676 start.go:340] cluster config:
	{Name:download-only-386345 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-386345 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:30:30.688838   13676 out.go:97] Starting "download-only-386345" primary control-plane node in "download-only-386345" cluster
	I1202 11:30:30.688862   13676 cache.go:121] Beginning downloading kic base image for docker with crio
	I1202 11:30:30.690075   13676 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1202 11:30:30.690103   13676 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:30:30.690200   13676 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1202 11:30:30.705816   13676 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1202 11:30:30.705921   13676 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1202 11:30:30.705939   13676 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1202 11:30:30.705943   13676 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1202 11:30:30.705951   13676 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1202 11:30:30.718247   13676 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:30:30.718283   13676 cache.go:56] Caching tarball of preloaded images
	I1202 11:30:30.718431   13676 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:30:30.720224   13676 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1202 11:30:30.720244   13676 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1202 11:30:30.755030   13676 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1202 11:30:34.359591   13676 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1202 11:30:34.359689   13676 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20033-6540/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1202 11:30:35.214567   13676 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1202 11:30:35.214905   13676 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/download-only-386345/config.json ...
	I1202 11:30:35.214932   13676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/download-only-386345/config.json: {Name:mk3cd5a6313e51f1df80568cca6fe69a4eb32399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 11:30:35.215089   13676 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1202 11:30:35.215230   13676 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20033-6540/.minikube/cache/linux/amd64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-386345 host does not exist
	  To start a cluster, run: "minikube start -p download-only-386345"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-386345
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.07s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-535118 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-535118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-535118
--- PASS: TestDownloadOnlyKic (1.07s)

                                                
                                    
x
+
TestBinaryMirror (0.75s)

                                                
                                                
=== RUN   TestBinaryMirror
I1202 11:30:37.597880   13299 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-422651 --alsologtostderr --binary-mirror http://127.0.0.1:43737 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-422651" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-422651
--- PASS: TestBinaryMirror (0.75s)
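TestBinaryMirror points minikube at a local mirror (--binary-mirror http://127.0.0.1:43737) instead of dl.k8s.io. The harness starts its own test server; purely as an illustration of what such a mirror endpoint can look like, a directory of release binaries could be served as below. The ./mirror directory layout is an assumption, not something taken from the test.

	package main
	
	import (
		"log"
		"net/http"
	)
	
	func main() {
		// Serve a local directory so that a client pointed at
		// http://127.0.0.1:43737 fetches binaries from here rather than dl.k8s.io.
		fs := http.FileServer(http.Dir("./mirror"))
		log.Fatal(http.ListenAndServe("127.0.0.1:43737", fs))
	}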

                                                
                                    
x
+
TestOffline (60.67s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-643555 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-643555 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (58.126062918s)
helpers_test.go:175: Cleaning up "offline-crio-643555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-643555
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-643555: (2.539950154s)
--- PASS: TestOffline (60.67s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-522394
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-522394: exit status 85 (56.955349ms)

                                                
                                                
-- stdout --
	* Profile "addons-522394" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-522394"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-522394
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-522394: exit status 85 (54.646621ms)

                                                
                                                
-- stdout --
	* Profile "addons-522394" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-522394"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
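Both PreSetup checks rely on minikube exiting with status 85 when an addons command targets a profile that does not exist yet. A hedged sketch of capturing that exit code from Go follows; the command is the same one the test runs, the surrounding program is illustrative.

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// The profile does not exist yet, so minikube should print the
		// "Profile ... not found" hint and exit non-zero (status 85 above).
		cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-522394")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
	
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}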

                                                
                                    
x
+
TestAddons/Setup (156.27s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-522394 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-522394 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m36.274067353s)
--- PASS: TestAddons/Setup (156.27s)
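For reference, the cluster used by the addon tests below can be brought up by hand with the same start invocation; this is a minimal sketch distilled from the command logged above (the addon list is abbreviated, and the profile name is simply the one this run used):

    # Start a Docker-driver cluster on CRI-O with (some of) the addons the suite exercises
    minikube start -p addons-522394 --driver=docker --container-runtime=crio \
      --memory=4000 --wait=true \
      --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth
    # List addon status once the cluster is up
    minikube -p addons-522394 addons list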

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-522394 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-522394 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.45s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-522394 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-522394 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8afb453e-2f15-4e6a-9ddb-24b626c161e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8afb453e-2f15-4e6a-9ddb-24b626c161e9] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003574023s
addons_test.go:633: (dbg) Run:  kubectl --context addons-522394 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-522394 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-522394 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.45s)
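The credential-injection check above can be repeated manually; a short sketch of the logged steps (the busybox manifest and service-account name are the test's own):

    # Create a pod and confirm the gcp-auth webhook injected fake credentials into it
    kubectl --context addons-522394 create -f testdata/busybox.yaml
    kubectl --context addons-522394 create sa gcp-auth-test
    kubectl --context addons-522394 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-522394 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"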

                                                
                                    
TestAddons/parallel/Registry (15.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.791414ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-vdszr" [2c730b2c-d2ab-48fe-8268-0064ccf42ac1] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003424118s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9xwj9" [9c2a618e-304b-4aef-b3a1-3daca132483a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003916278s
addons_test.go:331: (dbg) Run:  kubectl --context addons-522394 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-522394 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-522394 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.67936414s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 ip
2024/12/02 11:33:45 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.48s)
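A manual equivalent of the registry probe above, distilled from the logged commands; the host-side curl mirrors the logged GET against the node IP on port 5000:

    # Reach the registry service from inside the cluster
    kubectl --context addons-522394 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Reach the registry proxy from the host via the node IP
    curl -s "http://$(minikube -p addons-522394 ip):5000/"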

                                                
                                    
TestAddons/parallel/InspektorGadget (11.64s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6fzwd" [32335a45-83c8-4baa-848d-917bb1a5bc6d] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004102802s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-522394 addons disable inspektor-gadget --alsologtostderr -v=1: (5.639459675s)
--- PASS: TestAddons/parallel/InspektorGadget (11.64s)

                                                
                                    
TestAddons/parallel/CSI (54.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1202 11:34:04.258744   13299 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 11:34:04.262952   13299 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 11:34:04.262974   13299 kapi.go:107] duration metric: took 4.239841ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.24701ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-522394 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-522394 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c76a49cd-b4d0-4d21-9078-74bb79f45381] Pending
helpers_test.go:344: "task-pv-pod" [c76a49cd-b4d0-4d21-9078-74bb79f45381] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c76a49cd-b4d0-4d21-9078-74bb79f45381] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003226151s
addons_test.go:511: (dbg) Run:  kubectl --context addons-522394 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-522394 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-522394 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-522394 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-522394 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-522394 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-522394 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [543148f9-cf20-4bf6-b6e3-064e2187291e] Pending
helpers_test.go:344: "task-pv-pod-restore" [543148f9-cf20-4bf6-b6e3-064e2187291e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [543148f9-cf20-4bf6-b6e3-064e2187291e] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003769914s
addons_test.go:553: (dbg) Run:  kubectl --context addons-522394 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-522394 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-522394 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-522394 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.554280607s)
--- PASS: TestAddons/parallel/CSI (54.85s)
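The CSI hostpath sequence above reduces to a claim / pod / snapshot / restore loop; a sketch using the same manifests the test references:

    # Provision a volume, use it, snapshot it, then restore from the snapshot
    kubectl --context addons-522394 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-522394 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-522394 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-522394 delete pod task-pv-pod
    kubectl --context addons-522394 delete pvc hpvc
    kubectl --context addons-522394 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-522394 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml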

                                                
                                    
TestAddons/parallel/Headlamp (16.4s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-522394 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-wwcjv" [1a3b97ed-20cf-4e25-8a07-c3c604210578] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-wwcjv" [1a3b97ed-20cf-4e25-8a07-c3c604210578] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003604895s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-522394 addons disable headlamp --alsologtostderr -v=1: (5.617104784s)
--- PASS: TestAddons/parallel/Headlamp (16.40s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.47s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-2jkrf" [b90daa0f-ca6e-4a83-830d-314a3b9ddfbe] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004884531s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.47s)

                                                
                                    
TestAddons/parallel/LocalPath (8.09s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-522394 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-522394 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-522394 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5bfdf587-c4e0-4871-b8da-a3256338bf0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5bfdf587-c4e0-4871-b8da-a3256338bf0a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5bfdf587-c4e0-4871-b8da-a3256338bf0a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003257153s
addons_test.go:906: (dbg) Run:  kubectl --context addons-522394 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 ssh "cat /opt/local-path-provisioner/pvc-8f0db6fc-4610-41c7-b84f-75a28b3ebb7d_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-522394 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-522394 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.09s)
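Sketch of the local-path round trip above; the on-disk directory contains the PVC's UID, so the placeholder below has to be filled in from the bound PV (for example via kubectl get pvc test-pvc -o json):

    # Bind a PVC through the rancher local-path provisioner and read the file back from the node
    kubectl --context addons-522394 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-522394 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-522394 get pvc test-pvc -o jsonpath='{.status.phase}'
    minikube -p addons-522394 ssh "cat /opt/local-path-provisioner/<pvc-uid>_default_test-pvc/file1"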

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kwcbg" [e45feff4-5960-425e-9363-207b937d3696] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003555409s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.46s)

                                                
                                    
TestAddons/parallel/Yakd (11.63s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4z68s" [97c474fd-1e1a-4c18-a97e-0202ea2c516b] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004000745s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-522394 addons disable yakd --alsologtostderr -v=1: (5.62916064s)
--- PASS: TestAddons/parallel/Yakd (11.63s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.46s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-czks8" [28b7071f-be42-4af7-bcb6-44dcf77d9d72] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004221793s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.46s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.06s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-522394
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-522394: (11.811631897s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-522394
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-522394
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-522394
--- PASS: TestAddons/StoppedEnableDisable (12.06s)
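As the test demonstrates, addon toggles are accepted even against a stopped profile; minimal sketch of the logged sequence:

    minikube stop -p addons-522394
    minikube addons enable dashboard -p addons-522394
    minikube addons disable dashboard -p addons-522394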

                                                
                                    
TestCertOptions (30.82s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-234275 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-234275 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.237331006s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-234275 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-234275 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-234275 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-234275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-234275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-234275: (3.766960684s)
--- PASS: TestCertOptions (30.82s)
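To verify the extra SANs and custom port by hand, the same openssl inspection used above works; the trailing grep is only an illustrative way to surface the SAN block, not part of the test:

    # Start with custom apiserver IPs/names/port, then inspect the generated certificate
    minikube start -p cert-options-234275 --driver=docker --container-runtime=crio --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555
    minikube -p cert-options-234275 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"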

                                                
                                    
TestCertExpiration (221.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-950456 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-950456 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.597324553s)
E1202 12:06:46.079069   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-950456 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-950456 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (13.984220692s)
helpers_test.go:175: Cleaning up "cert-expiration-950456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-950456
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-950456: (2.376368065s)
--- PASS: TestCertExpiration (221.96s)
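The expiration flow reduces to two starts of the same profile; sketch of the logged commands (the test waits out the 3-minute window between them):

    # Create a cluster with short-lived certificates
    minikube start -p cert-expiration-950456 --driver=docker --container-runtime=crio --memory=2048 --cert-expiration=3m
    # ...after the certificates expire, a restart re-issues them with a longer lifetime
    minikube start -p cert-expiration-950456 --driver=docker --container-runtime=crio --memory=2048 --cert-expiration=8760h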

                                                
                                    
TestForceSystemdFlag (24.72s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-427534 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-427534 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.266538927s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-427534 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-427534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-427534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-427534: (3.166321119s)
--- PASS: TestForceSystemdFlag (24.72s)
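Sketch of the check above: force the systemd cgroup manager at start and confirm it landed in the CRI-O drop-in config:

    minikube start -p force-systemd-flag-427534 --driver=docker --container-runtime=crio --memory=2048 --force-systemd
    minikube -p force-systemd-flag-427534 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"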

                                                
                                    
TestForceSystemdEnv (41.9s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-704457 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-704457 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.333778617s)
helpers_test.go:175: Cleaning up "force-systemd-env-704457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-704457
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-704457: (3.567644911s)
--- PASS: TestForceSystemdEnv (41.90s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.52s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1202 12:08:36.253463   13299 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 12:08:36.253638   13299 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1202 12:08:36.283970   13299 install.go:62] docker-machine-driver-kvm2: exit status 1
W1202 12:08:36.284368   13299 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1202 12:08:36.284450   13299 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3152551412/001/docker-machine-driver-kvm2
I1202 12:08:36.551935   13299 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3152551412/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc0005cbce0 gz:0xc0005cbce8 tar:0xc0005cbc90 tar.bz2:0xc0005cbca0 tar.gz:0xc0005cbcb0 tar.xz:0xc0005cbcc0 tar.zst:0xc0005cbcd0 tbz2:0xc0005cbca0 tgz:0xc0005cbcb0 txz:0xc0005cbcc0 tzst:0xc0005cbcd0 xz:0xc0005cbcf0 zip:0xc0005cbd10 zst:0xc0005cbcf8] Getters:map[file:0xc00209bbe0 http:0xc000917400 https:0xc000917450] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1202 12:08:36.551984   13299 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3152551412/001/docker-machine-driver-kvm2
I1202 12:08:38.136464   13299 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 12:08:38.136553   13299 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1202 12:08:38.164552   13299 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1202 12:08:38.164584   13299 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1202 12:08:38.164654   13299 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1202 12:08:38.164682   13299 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3152551412/002/docker-machine-driver-kvm2
I1202 12:08:38.327135   13299 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3152551412/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc0005cbce0 gz:0xc0005cbce8 tar:0xc0005cbc90 tar.bz2:0xc0005cbca0 tar.gz:0xc0005cbcb0 tar.xz:0xc0005cbcc0 tar.zst:0xc0005cbcd0 tbz2:0xc0005cbca0 tgz:0xc0005cbcb0 txz:0xc0005cbcc0 tzst:0xc0005cbcd0 xz:0xc0005cbcf0 zip:0xc0005cbd10 zst:0xc0005cbcf8] Getters:map[file:0xc000983060 http:0xc0007ace10 https:0xc0007ace60] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1202 12:08:38.327176   13299 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3152551412/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.52s)

                                                
                                    
TestErrorSpam/setup (20.41s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-690587 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-690587 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-690587 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-690587 --driver=docker  --container-runtime=crio: (20.408305685s)
--- PASS: TestErrorSpam/setup (20.41s)

                                                
                                    
TestErrorSpam/start (0.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

                                                
                                    
TestErrorSpam/status (0.87s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 status
--- PASS: TestErrorSpam/status (0.87s)

                                                
                                    
TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
TestErrorSpam/unpause (1.56s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

                                                
                                    
TestErrorSpam/stop (1.35s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 stop: (1.168661s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-690587 --log_dir /tmp/nospam-690587 stop
--- PASS: TestErrorSpam/stop (1.35s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20033-6540/.minikube/files/etc/test/nested/copy/13299/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (42.85s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181307 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-181307 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (42.845195354s)
--- PASS: TestFunctional/serial/StartWithProxy (42.85s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (28.04s)

=== RUN   TestFunctional/serial/SoftStart
I1202 11:40:24.770225   13299 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181307 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-181307 --alsologtostderr -v=8: (28.036307255s)
functional_test.go:663: soft start took 28.037050596s for "functional-181307" cluster.
I1202 11:40:52.806902   13299 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (28.04s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-181307 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-181307 cache add registry.k8s.io/pause:3.3: (1.155012033s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-181307 cache add registry.k8s.io/pause:latest: (1.022576205s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-181307 /tmp/TestFunctionalserialCacheCmdcacheadd_local1623967119/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 cache add minikube-local-cache-test:functional-181307
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 cache delete minikube-local-cache-test:functional-181307
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-181307
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181307 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (266.368445ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
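The reload path exercised above, as a manual sketch: remove the image from the node, reload from minikube's local cache, and confirm it is back:

    minikube -p functional-181307 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-181307 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    minikube -p functional-181307 cache reload
    minikube -p functional-181307 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again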

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 kubectl -- --context functional-181307 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-181307 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.12s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181307 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-181307 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.116864024s)
functional_test.go:761: restart took 39.116977591s for "functional-181307" cluster.
I1202 11:41:38.817808   13299 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (39.12s)
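Sketch of the restart above: an existing profile is restarted with an extra apiserver flag passed through --extra-config, exactly as logged:

    minikube start -p functional-181307 --wait=all \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision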

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-181307 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-181307 logs: (1.355673687s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.36s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 logs --file /tmp/TestFunctionalserialLogsFileCmd69331175/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-181307 logs --file /tmp/TestFunctionalserialLogsFileCmd69331175/001/logs.txt: (1.354802669s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

                                                
                                    
TestFunctional/serial/InvalidService (4.3s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-181307 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-181307
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-181307: exit status 115 (318.979078ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31428 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-181307 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)
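Sketch of the failure mode verified above: a Service whose pods never become ready makes the service command exit non-zero with SVC_UNREACHABLE:

    kubectl --context functional-181307 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-181307    # exit status 115: no running pod for the service
    kubectl --context functional-181307 delete -f testdata/invalidsvc.yaml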

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181307 config get cpus: exit status 14 (70.478931ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181307 config get cpus: exit status 14 (54.718443ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
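
The exit status 14 above is the behaviour exercised here: `config get` on a key that is not set fails with "specified key could not be found in config". A minimal sketch of the same set/unset/get cycle against the functional-181307 profile:

  mk=out/minikube-linux-amd64
  p=functional-181307

  $mk -p "$p" config unset cpus    # no-op if the key is already absent
  $mk -p "$p" config get cpus      # exits 14: key not found in config
  $mk -p "$p" config set cpus 2    # persist a per-profile value
  $mk -p "$p" config get cpus      # prints 2, exit 0
  $mk -p "$p" config unset cpus    # remove it again
  $mk -p "$p" config get cpus      # back to exit 14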

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (10.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-181307 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-181307 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 54719: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.29s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181307 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-181307 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (182.160974ms)

                                                
                                                
-- stdout --
	* [functional-181307] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 11:42:09.993788   53971 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:42:09.993906   53971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:42:09.993915   53971 out.go:358] Setting ErrFile to fd 2...
	I1202 11:42:09.993919   53971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:42:09.994081   53971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 11:42:09.994603   53971 out.go:352] Setting JSON to false
	I1202 11:42:09.995578   53971 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1481,"bootTime":1733138249,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:42:09.995678   53971 start.go:139] virtualization: kvm guest
	I1202 11:42:09.998240   53971 out.go:177] * [functional-181307] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 11:42:09.999808   53971 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:42:09.999867   53971 notify.go:220] Checking for updates...
	I1202 11:42:10.002907   53971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:42:10.004548   53971 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:42:10.006097   53971 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	I1202 11:42:10.007597   53971 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:42:10.009222   53971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:42:10.011229   53971 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:42:10.011761   53971 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:42:10.057362   53971 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1202 11:42:10.057516   53971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:42:10.114946   53971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:55 SystemTime:2024-12-02 11:42:10.096662298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:42:10.115094   53971 docker.go:318] overlay module found
	I1202 11:42:10.116915   53971 out.go:177] * Using the docker driver based on existing profile
	I1202 11:42:10.118213   53971 start.go:297] selected driver: docker
	I1202 11:42:10.118232   53971 start.go:901] validating driver "docker" against &{Name:functional-181307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-181307 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:42:10.118321   53971 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:42:10.120675   53971 out.go:201] 
	W1202 11:42:10.122266   53971 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 11:42:10.123955   53971 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181307 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)
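
Both dry-run invocations behave as intended: with --memory 250MB minikube refuses to proceed (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY, usable minimum 1800MB), while the second run without a memory override validates the existing profile cleanly. A sketch of the same check:

  mk=out/minikube-linux-amd64
  p=functional-181307

  # Requesting less memory than the usable minimum fails fast without touching the cluster.
  $mk start -p "$p" --dry-run --memory 250MB --driver=docker --container-runtime=crio \
    || echo "rejected with exit=$?"    # exit 23 / RSRC_INSUFFICIENT_REQ_MEMORY

  # Without the undersized request, the dry run validates the existing profile and exits 0.
  $mk start -p "$p" --dry-run --driver=docker --container-runtime=crio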

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-181307 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-181307 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (158.969306ms)

                                                
                                                
-- stdout --
	* [functional-181307] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 11:42:10.389269   54190 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:42:10.389391   54190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:42:10.389402   54190 out.go:358] Setting ErrFile to fd 2...
	I1202 11:42:10.389407   54190 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:42:10.389670   54190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 11:42:10.390497   54190 out.go:352] Setting JSON to false
	I1202 11:42:10.391694   54190 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1481,"bootTime":1733138249,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 11:42:10.391767   54190 start.go:139] virtualization: kvm guest
	I1202 11:42:10.394076   54190 out.go:177] * [functional-181307] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1202 11:42:10.395522   54190 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 11:42:10.395522   54190 notify.go:220] Checking for updates...
	I1202 11:42:10.398335   54190 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 11:42:10.399740   54190 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 11:42:10.401113   54190 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	I1202 11:42:10.402254   54190 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 11:42:10.403625   54190 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 11:42:10.405461   54190 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:42:10.405946   54190 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 11:42:10.429589   54190 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1202 11:42:10.429684   54190 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:42:10.483161   54190 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:55 SystemTime:2024-12-02 11:42:10.467986902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:42:10.483299   54190 docker.go:318] overlay module found
	I1202 11:42:10.486125   54190 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1202 11:42:10.487759   54190 start.go:297] selected driver: docker
	I1202 11:42:10.487780   54190 start.go:901] validating driver "docker" against &{Name:functional-181307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-181307 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 11:42:10.487897   54190 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 11:42:10.490387   54190 out.go:201] 
	W1202 11:42:10.491767   54190 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 11:42:10.493040   54190 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
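
The French output above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ...") is the same memory rejection rendered through minikube's translations. The log does not show how the test selects the locale; assuming it follows the standard LANG/LC_ALL environment variables, an equivalent manual run might be:

  # Assumption: minikube picks its message catalogue from the standard locale
  # environment variables; the LC_ALL=fr setting is not taken from the log above.
  LC_ALL=fr out/minikube-linux-amd64 start -p functional-181307 \
    --dry-run --memory 250MB --driver=docker --container-runtime=crio
  # Expected: exit 23 with the French RSRC_INSUFFICIENT_REQ_MEMORY message.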

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
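
The three invocations cover the default table, a custom Go template (the "kublet" label is simply the test's own output key, not a minikube field), and JSON output. For scripting, the template and JSON forms are the useful ones; a small sketch:

  mk="out/minikube-linux-amd64 -p functional-181307"

  $mk status                                              # human-readable table
  $mk status -f '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'   # custom Go template over status fields
  $mk status -o json                                      # machine-readable JSON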

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (20.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-181307 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-181307 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-675fk" [b851b805-a690-400b-a873-ca74aa632b5b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-675fk" [b851b805-a690-400b-a873-ca74aa632b5b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.003830145s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30563
functional_test.go:1675: http://192.168.49.2:30563: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-675fk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30563
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.56s)
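
The test deploys the echoserver image, exposes it as a NodePort Service, resolves the node URL through minikube, and confirms the HTTP response (the echoed hostname and request headers above). The same flow by hand, assuming the functional-181307 profile:

  kubectl --context functional-181307 create deployment hello-node-connect \
    --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-181307 expose deployment hello-node-connect \
    --type=NodePort --port=8080

  # Wait for the deployment, then let minikube resolve the node IP and NodePort.
  kubectl --context functional-181307 wait --for=condition=available \
    deployment/hello-node-connect --timeout=120s
  url=$(out/minikube-linux-amd64 -p functional-181307 service hello-node-connect --url)

  curl -s "$url"    # echoserver prints the pod hostname and request details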

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (32.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b9f78385-925e-4581-836c-78db30aa1631] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004219131s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-181307 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-181307 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-181307 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-181307 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8b297408-444f-4d82-b4c8-f898cf07b09a] Pending
helpers_test.go:344: "sp-pod" [8b297408-444f-4d82-b4c8-f898cf07b09a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8b297408-444f-4d82-b4c8-f898cf07b09a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004076673s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-181307 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-181307 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-181307 delete -f testdata/storage-provisioner/pod.yaml: (1.582100145s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-181307 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [45bb0c7e-13bd-455c-99bd-7e8d84128a9d] Pending
helpers_test.go:344: "sp-pod" [45bb0c7e-13bd-455c-99bd-7e8d84128a9d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003263552s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-181307 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.31s)
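
The sequence above is a persistence check: provision a PVC through the default storage class, mount it in sp-pod, write /tmp/mount/foo, delete and recreate the pod, and confirm the file survives. A condensed sketch using the same manifests from the minikube test tree:

  ctx="kubectl --context functional-181307"

  $ctx apply -f testdata/storage-provisioner/pvc.yaml     # claim "myclaim"
  $ctx apply -f testdata/storage-provisioner/pod.yaml     # pod "sp-pod" mounting the claim
  $ctx wait --for=condition=ready pod/sp-pod --timeout=180s

  $ctx exec sp-pod -- touch /tmp/mount/foo                # write into the volume
  $ctx delete -f testdata/storage-provisioner/pod.yaml    # drop the pod, keep the PVC
  $ctx apply -f testdata/storage-provisioner/pod.yaml     # recreate it
  $ctx wait --for=condition=ready pod/sp-pod --timeout=180s
  $ctx exec sp-pod -- ls /tmp/mount                       # "foo" should still be there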

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh -n functional-181307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 cp functional-181307:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3361968375/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh -n functional-181307 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh -n functional-181307 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.57s)
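
minikube cp copies files between the host and the node filesystem; the test pushes a file into the node, pulls it back out, and also copies into a target directory that does not exist yet. Equivalent commands:

  mk="out/minikube-linux-amd64 -p functional-181307"

  # Host -> node
  $mk cp testdata/cp-test.txt /home/docker/cp-test.txt
  $mk ssh -n functional-181307 "sudo cat /home/docker/cp-test.txt"

  # Node -> host
  $mk cp functional-181307:/home/docker/cp-test.txt /tmp/cp-test.txt

  # Host -> node, creating the missing directory on the way
  $mk cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt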

                                                
                                    
x
+
TestFunctional/parallel/MySQL (19.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-181307 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-nnr7j" [d1ea91bc-4605-48d5-ac16-9513c8308dd6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-nnr7j" [d1ea91bc-4605-48d5-ac16-9513c8308dd6] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.00365997s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-181307 exec mysql-6cdb49bbb-nnr7j -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-181307 exec mysql-6cdb49bbb-nnr7j -- mysql -ppassword -e "show databases;": exit status 1 (100.944854ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 11:42:04.503820   13299 retry.go:31] will retry after 1.346579294s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-181307 exec mysql-6cdb49bbb-nnr7j -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.74s)
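
The first exec fails because mysqld inside the pod is still starting (the socket is not yet available), so the harness retries and the second attempt succeeds. A hedged sketch with the same retry idea:

  ctx="kubectl --context functional-181307"

  $ctx replace --force -f testdata/mysql.yaml
  $ctx wait --for=condition=ready pod -l app=mysql --timeout=600s
  pod=$($ctx get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')

  # mysqld may still be warming up after the pod reports Ready; retry briefly.
  for i in 1 2 3 4 5; do
    $ctx exec "$pod" -- mysql -ppassword -e "show databases;" && break
    sleep 2
  done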

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/13299/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo cat /etc/test/nested/copy/13299/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
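
The synced path mirrors the test's host-side layout. Assuming the standard mechanism where anything placed under $MINIKUBE_HOME/files/<path> is copied to <path> inside the node when the profile is (re)started, the check reduces to:

  # Assumption: files under ~/.minikube/files/<path> are synced to <path> in the node
  # at start time; 13299 appears to be the test run's process id, used to keep the path unique.
  mkdir -p ~/.minikube/files/etc/test/nested/copy/13299
  echo "Test file for checking file sync process" \
    > ~/.minikube/files/etc/test/nested/copy/13299/hosts

  out/minikube-linux-amd64 -p functional-181307 ssh \
    "sudo cat /etc/test/nested/copy/13299/hosts"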

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/13299.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo cat /etc/ssl/certs/13299.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/13299.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo cat /usr/share/ca-certificates/13299.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/132992.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo cat /etc/ssl/certs/132992.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/132992.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo cat /usr/share/ca-certificates/132992.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.77s)
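
The same PEM is expected in /etc/ssl/certs, in /usr/share/ca-certificates, and under its OpenSSL hash name (51391683.0 / 3ec20f2e.0). Assuming the certificates originate from the host's $MINIKUBE_HOME/certs directory and are installed into the node at start, the verification is just:

  mk="out/minikube-linux-amd64 -p functional-181307"

  # All three locations should hold the same certificate (13299 is the test PID suffix).
  $mk ssh "sudo cat /etc/ssl/certs/13299.pem"
  $mk ssh "sudo cat /usr/share/ca-certificates/13299.pem"
  $mk ssh "sudo cat /etc/ssl/certs/51391683.0"    # OpenSSL hash link to the same cert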

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-181307 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181307 ssh "sudo systemctl is-active docker": exit status 1 (278.960593ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181307 ssh "sudo systemctl is-active containerd": exit status 1 (256.982886ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
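
Since this profile runs with --container-runtime=crio, both docker and containerd are expected to report "inactive" inside the node; systemctl is-active exits 3 for an inactive unit, which is the ssh status 3 seen above. A direct check:

  mk="out/minikube-linux-amd64 -p functional-181307"

  $mk ssh "sudo systemctl is-active docker"       # inactive (exit 3) on a crio profile
  $mk ssh "sudo systemctl is-active containerd"   # inactive (exit 3)
  $mk ssh "sudo systemctl is-active crio"         # active (exit 0) -- not shown in the log above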

                                                
                                    
x
+
TestFunctional/parallel/License (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (19.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-181307 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-181307 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-fnq6w" [df22d927-561a-4a7c-8505-585430b09bfe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-fnq6w" [df22d927-561a-4a7c-8505-585430b09bfe] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.004403806s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.18s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "307.586556ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.364136ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "307.803944ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.857601ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (15.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdany-port1300997009/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733139708936151575" to /tmp/TestFunctionalparallelMountCmdany-port1300997009/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733139708936151575" to /tmp/TestFunctionalparallelMountCmdany-port1300997009/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733139708936151575" to /tmp/TestFunctionalparallelMountCmdany-port1300997009/001/test-1733139708936151575
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181307 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (376.381779ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 11:41:49.312895   13299 retry.go:31] will retry after 339.565437ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 11:41 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 11:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 11:41 test-1733139708936151575
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh cat /mount-9p/test-1733139708936151575
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-181307 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6ee0cae8-5bf6-4661-9063-0db15e030ca3] Pending
helpers_test.go:344: "busybox-mount" [6ee0cae8-5bf6-4661-9063-0db15e030ca3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6ee0cae8-5bf6-4661-9063-0db15e030ca3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6ee0cae8-5bf6-4661-9063-0db15e030ca3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.003613493s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-181307 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdany-port1300997009/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.08s)
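
The mount test runs `minikube mount` as a background daemon, polls findmnt until the 9p mount appears (the first probe races the daemon and is retried), then exercises the share from both sides. A minimal manual version:

  mk="out/minikube-linux-amd64 -p functional-181307"
  src=$(mktemp -d)

  # Expose a host directory inside the node over 9p; keep the helper running in the background.
  $mk mount "$src:/mount-9p" &
  mount_pid=$!

  # Poll until the mount shows up (the daemon needs a moment to attach).
  until $mk ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done

  echo "hello from host" > "$src/created-by-test"
  $mk ssh "cat /mount-9p/created-by-test"         # visible inside the node

  $mk ssh "sudo umount -f /mount-9p"; kill "$mount_pid"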

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdspecific-port1785406629/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181307 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.310436ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 11:42:04.274888   13299 retry.go:31] will retry after 732.940035ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdspecific-port1785406629/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181307 ssh "sudo umount -f /mount-9p": exit status 1 (246.785132ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-181307 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdspecific-port1785406629/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup457959431/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup457959431/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup457959431/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-181307 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup457959431/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup457959431/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-181307 /tmp/TestFunctionalparallelMountCmdVerifyCleanup457959431/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.91s)
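
With several mount daemons running (/mount1 through /mount3), `minikube mount --kill=true` tears them all down in one go, which is why the subsequent per-process stops find nothing left to kill. A short sketch:

  mk="out/minikube-linux-amd64 -p functional-181307"
  d=$(mktemp -d)

  $mk mount "$d:/mount1" &
  $mk mount "$d:/mount2" &
  $mk mount "$d:/mount3" &
  sleep 2
  $mk ssh "findmnt -T" /mount1    # repeat for /mount2 and /mount3

  # One command cleans up every mount helper for this profile.
  $mk mount --kill=true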

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-181307 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-181307 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-181307 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-181307 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 52905: os: process already finished
helpers_test.go:502: unable to terminate pid 52629: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-181307 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-181307 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8d2936d3-08fb-4b17-b4ed-8ce3968aa02e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8d2936d3-08fb-4b17-b4ed-8ce3968aa02e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004319399s
I1202 11:42:16.079885   13299 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.23s)
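
testdata/testsvc.yaml deploys the nginx-svc pod behind a Service that the tunnel tests later resolve; with `minikube tunnel` running in the background, a LoadBalancer-type service eventually receives an external IP reachable from the host. A hedged outline (the Service type is implied by the tunnel test but not shown in this excerpt):

  # Keep a tunnel running so LoadBalancer services get a routable external IP.
  out/minikube-linux-amd64 -p functional-181307 tunnel --alsologtostderr &

  kubectl --context functional-181307 apply -f testdata/testsvc.yaml
  kubectl --context functional-181307 wait --for=condition=ready pod -l run=nginx-svc --timeout=240s

  # Once the tunnel has picked the service up, an external IP appears here.
  kubectl --context functional-181307 get svc nginx-svc -w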

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 service list -o json
functional_test.go:1494: Took "892.138277ms" to run "out/minikube-linux-amd64 -p functional-181307 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-181307 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-181307
localhost/kicbase/echo-server:functional-181307
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241023-a345ebe4
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-181307 image ls --format short --alsologtostderr:
I1202 11:42:17.385808   56438 out.go:345] Setting OutFile to fd 1 ...
I1202 11:42:17.385912   56438 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:42:17.385921   56438 out.go:358] Setting ErrFile to fd 2...
I1202 11:42:17.385925   56438 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:42:17.386100   56438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
I1202 11:42:17.386741   56438 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:42:17.386877   56438 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:42:17.387285   56438 cli_runner.go:164] Run: docker container inspect functional-181307 --format={{.State.Status}}
I1202 11:42:17.405794   56438 ssh_runner.go:195] Run: systemctl --version
I1202 11:42:17.405845   56438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-181307
I1202 11:42:17.430525   56438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/functional-181307/id_rsa Username:docker}
I1202 11:42:17.525119   56438 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-181307 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/kindest/kindnetd              | v20241023-a345ebe4 | 9ca7e41918271 | 95MB   |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-181307  | 5a08fded44099 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | alpine             | 91ca84b4f5779 | 54MB   |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/kicbase/echo-server           | functional-181307  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-181307 image ls --format table --alsologtostderr:
I1202 11:42:17.670720   56567 out.go:345] Setting OutFile to fd 1 ...
I1202 11:42:17.670855   56567 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:42:17.670864   56567 out.go:358] Setting ErrFile to fd 2...
I1202 11:42:17.670869   56567 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:42:17.671046   56567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
I1202 11:42:17.672731   56567 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:42:17.673058   56567 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:42:17.673454   56567 cli_runner.go:164] Run: docker container inspect functional-181307 --format={{.State.Status}}
I1202 11:42:17.690474   56567 ssh_runner.go:195] Run: systemctl --version
I1202 11:42:17.690522   56567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-181307
I1202 11:42:17.716972   56567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/functional-181307/id_rsa Username:docker}
I1202 11:42:17.901547   56567 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-181307 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66","repoDigests":["docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303","docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371"],"repoTags":["docker.io/library/ngi
nx:alpine"],"size":"53957349"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917
a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5","repoDigests":["docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16","docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"],"repoTags":["docker.io/kindest/kindnetd:v20241023-a345ebe4"],"size":"94958644"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":
["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-181307"],"size":"4943877"},{"id":"5a08fded4409970c2b0fe0174f0b0063626e48a6a70d50bcc1fb93a005097ae7","repoDigests":["localhost/minikube-local-cache-test@sha256:50e0a30fe893d3ffede4720e73a92dac5a920d86466f56c73c3877c1b6683c22"],"repoTags":["localhost/minikube-local-cache-test:functional-181307"],"size":"3330"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133ea
a4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests
":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDiges
ts":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-181307 image ls --format json --alsologtostderr:
I1202 11:42:17.519876   56522 out.go:345] Setting OutFile to fd 1 ...
I1202 11:42:17.519973   56522 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:42:17.519981   56522 out.go:358] Setting ErrFile to fd 2...
I1202 11:42:17.519985   56522 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:42:17.520188   56522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
I1202 11:42:17.521007   56522 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:42:17.521156   56522 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:42:17.521725   56522 cli_runner.go:164] Run: docker container inspect functional-181307 --format={{.State.Status}}
I1202 11:42:17.548110   56522 ssh_runner.go:195] Run: systemctl --version
I1202 11:42:17.548174   56522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-181307
I1202 11:42:17.567661   56522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/functional-181307/id_rsa Username:docker}
I1202 11:42:17.701685   56522 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)
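
The single-line JSON above is the `image ls --format json` view of `sudo crictl images --output json`. A minimal Go sketch for decoding that shape follows; the field names (id, repoDigests, repoTags, size) are taken directly from the output shown, while the struct itself is illustrative rather than minikube's internal type.

// decode_image_ls.go - parse the JSON produced by `minikube image ls --format json` (illustrative).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, reported as a string
}

func main() {
	// Same command the test runs, minus --alsologtostderr.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-181307",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}

	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}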

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-181307 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-181307
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5
repoDigests:
- docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16
- docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d
repoTags:
- docker.io/kindest/kindnetd:v20241023-a345ebe4
size: "94958644"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests:
- docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303
- docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371
repoTags:
- docker.io/library/nginx:alpine
size: "53957349"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5a08fded4409970c2b0fe0174f0b0063626e48a6a70d50bcc1fb93a005097ae7
repoDigests:
- localhost/minikube-local-cache-test@sha256:50e0a30fe893d3ffede4720e73a92dac5a920d86466f56c73c3877c1b6683c22
repoTags:
- localhost/minikube-local-cache-test:functional-181307
size: "3330"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-181307 image ls --format yaml --alsologtostderr:
I1202 11:42:17.866934   56615 out.go:345] Setting OutFile to fd 1 ...
I1202 11:42:17.867048   56615 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:42:17.867058   56615 out.go:358] Setting ErrFile to fd 2...
I1202 11:42:17.867062   56615 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:42:17.867239   56615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
I1202 11:42:17.867845   56615 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:42:17.867938   56615 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:42:17.868376   56615 cli_runner.go:164] Run: docker container inspect functional-181307 --format={{.State.Status}}
I1202 11:42:17.885254   56615 ssh_runner.go:195] Run: systemctl --version
I1202 11:42:17.885317   56615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-181307
I1202 11:42:17.901999   56615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/functional-181307/id_rsa Username:docker}
I1202 11:42:18.053521   56615 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-181307 ssh pgrep buildkitd: exit status 1 (260.608495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image build -t localhost/my-image:functional-181307 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-181307 image build -t localhost/my-image:functional-181307 testdata/build --alsologtostderr: (2.015928602s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-181307 image build -t localhost/my-image:functional-181307 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 07a55168987
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-181307
--> ea4c844b878
Successfully tagged localhost/my-image:functional-181307
ea4c844b8781b04eaa4efe4d7e5250c0ea59e737fbd5ac00a04286a6973d6b27
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-181307 image build -t localhost/my-image:functional-181307 testdata/build --alsologtostderr:
I1202 11:42:18.267736   56761 out.go:345] Setting OutFile to fd 1 ...
I1202 11:42:18.268526   56761 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:42:18.268544   56761 out.go:358] Setting ErrFile to fd 2...
I1202 11:42:18.268551   56761 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:42:18.269027   56761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
I1202 11:42:18.270394   56761 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:42:18.270938   56761 config.go:182] Loaded profile config "functional-181307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:42:18.271340   56761 cli_runner.go:164] Run: docker container inspect functional-181307 --format={{.State.Status}}
I1202 11:42:18.288339   56761 ssh_runner.go:195] Run: systemctl --version
I1202 11:42:18.288387   56761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-181307
I1202 11:42:18.306433   56761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/functional-181307/id_rsa Username:docker}
I1202 11:42:18.400849   56761 build_images.go:161] Building image from path: /tmp/build.3795030337.tar
I1202 11:42:18.400917   56761 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 11:42:18.410636   56761 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3795030337.tar
I1202 11:42:18.414499   56761 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3795030337.tar: stat -c "%s %y" /var/lib/minikube/build/build.3795030337.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3795030337.tar': No such file or directory
I1202 11:42:18.414534   56761 ssh_runner.go:362] scp /tmp/build.3795030337.tar --> /var/lib/minikube/build/build.3795030337.tar (3072 bytes)
I1202 11:42:18.440142   56761 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3795030337
I1202 11:42:18.448961   56761 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3795030337 -xf /var/lib/minikube/build/build.3795030337.tar
I1202 11:42:18.507648   56761 crio.go:315] Building image: /var/lib/minikube/build/build.3795030337
I1202 11:42:18.507718   56761 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-181307 /var/lib/minikube/build/build.3795030337 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1202 11:42:20.206436   56761 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-181307 /var/lib/minikube/build/build.3795030337 --cgroup-manager=cgroupfs: (1.698687943s)
I1202 11:42:20.206538   56761 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3795030337
I1202 11:42:20.215487   56761 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3795030337.tar
I1202 11:42:20.223552   56761 build_images.go:217] Built localhost/my-image:functional-181307 from /tmp/build.3795030337.tar
I1202 11:42:20.223584   56761 build_images.go:133] succeeded building to: functional-181307
I1202 11:42:20.223591   56761 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image ls
2024/12/02 11:42:20 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.50s)
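
The podman output above shows the whole build context: a three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) plus a content.txt file. The following Go sketch recreates an equivalent context in a temp directory and drives the same `image build` invocation the test uses; the temp-dir handling and the content.txt payload are assumptions, everything else is read off the log.

// image_build_sketch.go - rebuild the build context from the STEP lines above and run
// the same `minikube image build` command as functional_test.go:315 (illustrative).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Dockerfile contents taken from the three STEP lines in the podman output.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	// Placeholder payload; the real testdata/build content is not shown in the log.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-181307",
		"image", "build", "-t", "localhost/my-image:functional-181307", dir, "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}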

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-181307
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31707
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image load --daemon kicbase/echo-server:functional-181307 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-181307 image load --daemon kicbase/echo-server:functional-181307 --alsologtostderr: (2.243538719s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31707
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image load --daemon kicbase/echo-server:functional-181307 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-181307
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image load --daemon kicbase/echo-server:functional-181307 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image save kicbase/echo-server:functional-181307 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image rm kicbase/echo-server:functional-181307 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-181307
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-181307 image save --daemon kicbase/echo-server:functional-181307 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-181307
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-181307 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.58.193 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
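
The direct-access check above only needs the Service's external IP to answer plain HTTP from the host while `minikube tunnel` is running. A minimal Go probe along those lines, using the 10.97.58.193 ingress IP reported in this run (the retry loop and timeout are illustrative):

// probe_tunnel.go - verify the tunneled LoadBalancer IP answers HTTP from the host (illustrative).
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://10.97.58.193" // LoadBalancer ingress IP reported by the test
	client := &http.Client{Timeout: 5 * time.Second}

	for i := 0; i < 5; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Printf("tunnel at %s is working! (HTTP %d)\n", url, resp.StatusCode)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("tunnel did not respond")
}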

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-181307 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-181307
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-181307
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-181307
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (99.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-093284 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1202 11:43:15.248457   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:15.254868   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:15.266256   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:15.287691   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:15.329210   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:15.410611   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:15.572144   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:15.893808   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:16.535594   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:17.817659   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:20.379438   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:25.501232   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:35.743104   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:43:56.225224   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-093284 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m39.034327455s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (99.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-093284 -- rollout status deployment/busybox: (2.361967261s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-fwgsp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-srsvt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-wljw5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-fwgsp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-srsvt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-wljw5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-fwgsp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-srsvt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-wljw5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.26s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-fwgsp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-fwgsp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-srsvt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-srsvt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-wljw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-093284 -- exec busybox-7dff88458-wljw5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)
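
Each PingHostFromPods iteration above does two execs inside a busybox pod: resolve host.minikube.internal with the nslookup/awk/cut pipeline, then ping the address it returns (192.168.49.1 in this run). A minimal Go sketch of that pair of steps, using plain kubectl against the ha-093284 context rather than the minikube kubectl wrapper the suite invokes; the pod name is one of those exercised above.

// ping_host_from_pod.go - resolve host.minikube.internal inside a pod and ping it (illustrative).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "ha-093284"
	pod := "busybox-7dff88458-fwgsp"

	// Same pipeline the test runs: nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal resolves to", hostIP)

	// ping -c 1 <host IP> from inside the same pod
	ping := exec.Command("kubectl", "--context", ctx, "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
	if res, err := ping.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("ping failed: %v\n%s", err, res))
	}
	fmt.Println("pod can reach the host network")
}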

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (32.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-093284 -v=7 --alsologtostderr
E1202 11:44:37.187402   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-093284 -v=7 --alsologtostderr: (31.384176113s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-093284 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (15.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp testdata/cp-test.txt ha-093284:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile164155172/001/cp-test_ha-093284.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284:/home/docker/cp-test.txt ha-093284-m02:/home/docker/cp-test_ha-093284_ha-093284-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m02 "sudo cat /home/docker/cp-test_ha-093284_ha-093284-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284:/home/docker/cp-test.txt ha-093284-m03:/home/docker/cp-test_ha-093284_ha-093284-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m03 "sudo cat /home/docker/cp-test_ha-093284_ha-093284-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284:/home/docker/cp-test.txt ha-093284-m04:/home/docker/cp-test_ha-093284_ha-093284-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m04 "sudo cat /home/docker/cp-test_ha-093284_ha-093284-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp testdata/cp-test.txt ha-093284-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile164155172/001/cp-test_ha-093284-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m02:/home/docker/cp-test.txt ha-093284:/home/docker/cp-test_ha-093284-m02_ha-093284.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284 "sudo cat /home/docker/cp-test_ha-093284-m02_ha-093284.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m02:/home/docker/cp-test.txt ha-093284-m03:/home/docker/cp-test_ha-093284-m02_ha-093284-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m03 "sudo cat /home/docker/cp-test_ha-093284-m02_ha-093284-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m02:/home/docker/cp-test.txt ha-093284-m04:/home/docker/cp-test_ha-093284-m02_ha-093284-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m04 "sudo cat /home/docker/cp-test_ha-093284-m02_ha-093284-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp testdata/cp-test.txt ha-093284-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile164155172/001/cp-test_ha-093284-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m03:/home/docker/cp-test.txt ha-093284:/home/docker/cp-test_ha-093284-m03_ha-093284.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284 "sudo cat /home/docker/cp-test_ha-093284-m03_ha-093284.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m03:/home/docker/cp-test.txt ha-093284-m02:/home/docker/cp-test_ha-093284-m03_ha-093284-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m02 "sudo cat /home/docker/cp-test_ha-093284-m03_ha-093284-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m03:/home/docker/cp-test.txt ha-093284-m04:/home/docker/cp-test_ha-093284-m03_ha-093284-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m04 "sudo cat /home/docker/cp-test_ha-093284-m03_ha-093284-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp testdata/cp-test.txt ha-093284-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile164155172/001/cp-test_ha-093284-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m04:/home/docker/cp-test.txt ha-093284:/home/docker/cp-test_ha-093284-m04_ha-093284.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284 "sudo cat /home/docker/cp-test_ha-093284-m04_ha-093284.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m04:/home/docker/cp-test.txt ha-093284-m02:/home/docker/cp-test_ha-093284-m04_ha-093284-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m02 "sudo cat /home/docker/cp-test_ha-093284-m04_ha-093284-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 cp ha-093284-m04:/home/docker/cp-test.txt ha-093284-m03:/home/docker/cp-test_ha-093284-m04_ha-093284-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m03 "sudo cat /home/docker/cp-test_ha-093284-m04_ha-093284-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.81s)
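Every CopyFile step above pairs a cp with an "ssh -n ... sudo cat" read-back, first host-to-node and then node-to-node. A condensed sketch of that pattern with the profile and node names from this run (the destination path in the second pair is illustrative only, not taken from the log):

    # host -> node, then verify on the node
    out/minikube-linux-amd64 -p ha-093284 cp testdata/cp-test.txt ha-093284:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284 "sudo cat /home/docker/cp-test.txt"
    # node -> node, then verify on the destination node
    out/minikube-linux-amd64 -p ha-093284 cp ha-093284:/home/docker/cp-test.txt ha-093284-m02:/home/docker/cp-test_copy.txt
    out/minikube-linux-amd64 -p ha-093284 ssh -n ha-093284-m02 "sudo cat /home/docker/cp-test_copy.txt"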

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-093284 node stop m02 -v=7 --alsologtostderr: (11.827956224s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr: exit status 7 (659.040764ms)

                                                
                                                
-- stdout --
	ha-093284
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-093284-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-093284-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-093284-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 11:45:27.729303   78094 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:45:27.729565   78094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:27.729576   78094 out.go:358] Setting ErrFile to fd 2...
	I1202 11:45:27.729580   78094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:45:27.729745   78094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 11:45:27.729928   78094 out.go:352] Setting JSON to false
	I1202 11:45:27.729954   78094 mustload.go:65] Loading cluster: ha-093284
	I1202 11:45:27.730010   78094 notify.go:220] Checking for updates...
	I1202 11:45:27.730556   78094 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:45:27.730584   78094 status.go:174] checking status of ha-093284 ...
	I1202 11:45:27.731095   78094 cli_runner.go:164] Run: docker container inspect ha-093284 --format={{.State.Status}}
	I1202 11:45:27.749142   78094 status.go:371] ha-093284 host status = "Running" (err=<nil>)
	I1202 11:45:27.749169   78094 host.go:66] Checking if "ha-093284" exists ...
	I1202 11:45:27.749424   78094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284
	I1202 11:45:27.768415   78094 host.go:66] Checking if "ha-093284" exists ...
	I1202 11:45:27.768672   78094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 11:45:27.768717   78094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284
	I1202 11:45:27.786284   78094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284/id_rsa Username:docker}
	I1202 11:45:27.877404   78094 ssh_runner.go:195] Run: systemctl --version
	I1202 11:45:27.881386   78094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:45:27.891887   78094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:45:27.938123   78094 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-12-02 11:45:27.928634951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:45:27.938702   78094 kubeconfig.go:125] found "ha-093284" server: "https://192.168.49.254:8443"
	I1202 11:45:27.938735   78094 api_server.go:166] Checking apiserver status ...
	I1202 11:45:27.938776   78094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:45:27.949795   78094 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1514/cgroup
	I1202 11:45:27.959067   78094 api_server.go:182] apiserver freezer: "6:freezer:/docker/16f4bc0b0c0c820c3ad09be303eb4dc9a60ec063091fc826e7ac3f40338ef242/crio/crio-cf821e0e1954f2ae202aa87dfefee031c34d9c897330948708009f51b90bbfbc"
	I1202 11:45:27.959208   78094 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/16f4bc0b0c0c820c3ad09be303eb4dc9a60ec063091fc826e7ac3f40338ef242/crio/crio-cf821e0e1954f2ae202aa87dfefee031c34d9c897330948708009f51b90bbfbc/freezer.state
	I1202 11:45:27.967111   78094 api_server.go:204] freezer state: "THAWED"
	I1202 11:45:27.967164   78094 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 11:45:27.972236   78094 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 11:45:27.972330   78094 status.go:463] ha-093284 apiserver status = Running (err=<nil>)
	I1202 11:45:27.972350   78094 status.go:176] ha-093284 status: &{Name:ha-093284 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 11:45:27.972366   78094 status.go:174] checking status of ha-093284-m02 ...
	I1202 11:45:27.972714   78094 cli_runner.go:164] Run: docker container inspect ha-093284-m02 --format={{.State.Status}}
	I1202 11:45:27.990424   78094 status.go:371] ha-093284-m02 host status = "Stopped" (err=<nil>)
	I1202 11:45:27.990454   78094 status.go:384] host is not running, skipping remaining checks
	I1202 11:45:27.990462   78094 status.go:176] ha-093284-m02 status: &{Name:ha-093284-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 11:45:27.990482   78094 status.go:174] checking status of ha-093284-m03 ...
	I1202 11:45:27.990740   78094 cli_runner.go:164] Run: docker container inspect ha-093284-m03 --format={{.State.Status}}
	I1202 11:45:28.007929   78094 status.go:371] ha-093284-m03 host status = "Running" (err=<nil>)
	I1202 11:45:28.007956   78094 host.go:66] Checking if "ha-093284-m03" exists ...
	I1202 11:45:28.008234   78094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284-m03
	I1202 11:45:28.025445   78094 host.go:66] Checking if "ha-093284-m03" exists ...
	I1202 11:45:28.025691   78094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 11:45:28.025726   78094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m03
	I1202 11:45:28.043922   78094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32794 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m03/id_rsa Username:docker}
	I1202 11:45:28.137551   78094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:45:28.148638   78094 kubeconfig.go:125] found "ha-093284" server: "https://192.168.49.254:8443"
	I1202 11:45:28.148665   78094 api_server.go:166] Checking apiserver status ...
	I1202 11:45:28.148695   78094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:45:28.158932   78094 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1432/cgroup
	I1202 11:45:28.168063   78094 api_server.go:182] apiserver freezer: "6:freezer:/docker/6b34603e138473739dfbcff310b2dcd32b120b67af46bd70ed62f49d8b886d6e/crio/crio-558a24d91d605b6d8bc8c8c41cde56ead23a0113d85f31b7a94bca4f14c00075"
	I1202 11:45:28.168151   78094 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6b34603e138473739dfbcff310b2dcd32b120b67af46bd70ed62f49d8b886d6e/crio/crio-558a24d91d605b6d8bc8c8c41cde56ead23a0113d85f31b7a94bca4f14c00075/freezer.state
	I1202 11:45:28.177273   78094 api_server.go:204] freezer state: "THAWED"
	I1202 11:45:28.177300   78094 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1202 11:45:28.180957   78094 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1202 11:45:28.180979   78094 status.go:463] ha-093284-m03 apiserver status = Running (err=<nil>)
	I1202 11:45:28.180987   78094 status.go:176] ha-093284-m03 status: &{Name:ha-093284-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 11:45:28.181007   78094 status.go:174] checking status of ha-093284-m04 ...
	I1202 11:45:28.181251   78094 cli_runner.go:164] Run: docker container inspect ha-093284-m04 --format={{.State.Status}}
	I1202 11:45:28.200067   78094 status.go:371] ha-093284-m04 host status = "Running" (err=<nil>)
	I1202 11:45:28.200090   78094 host.go:66] Checking if "ha-093284-m04" exists ...
	I1202 11:45:28.200346   78094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-093284-m04
	I1202 11:45:28.217540   78094 host.go:66] Checking if "ha-093284-m04" exists ...
	I1202 11:45:28.217772   78094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 11:45:28.217805   78094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-093284-m04
	I1202 11:45:28.236114   78094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32799 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/ha-093284-m04/id_rsa Username:docker}
	I1202 11:45:28.329050   78094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:45:28.339813   78094 status.go:176] ha-093284-m04 status: &{Name:ha-093284-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.49s)
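The sequence above is the whole check: stop one control-plane node, then confirm that status reports it Stopped and returns a non-zero exit code. A sketch of the same two commands from this run (exit status 7 is what the log shows once any node is down; treat it as observed behaviour in this run rather than a documented contract):

    out/minikube-linux-amd64 -p ha-093284 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
    echo $?   # 7 in the run above, with ha-093284-m02 reported as Stopped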

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (25.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-093284 node start m02 -v=7 --alsologtostderr: (23.928943627s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr: (1.132214673s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (150.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-093284 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-093284 -v=7 --alsologtostderr
E1202 11:45:59.109769   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-093284 -v=7 --alsologtostderr: (36.588845602s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-093284 --wait=true -v=7 --alsologtostderr
E1202 11:46:46.079266   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:46.085656   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:46.097014   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:46.118590   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:46.160018   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:46.241490   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:46.403219   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:46.725127   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:47.366990   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:48.648967   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:51.211900   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:46:56.333799   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:47:06.575897   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:47:27.057483   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:48:08.018791   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
E1202 11:48:15.248515   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-093284 --wait=true -v=7 --alsologtostderr: (1m54.243483328s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-093284
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (150.96s)
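RestartClusterKeepsNodes is a stop of the whole profile, a --wait=true start, and a before/after node-list comparison. A sketch of that flow using the commands shown above (timings vary by machine; the cert_rotation messages interleaved in the log refer to the addons-522394 and functional-181307 profiles from earlier tests, not to ha-093284):

    out/minikube-linux-amd64 node list -p ha-093284 -v=7 --alsologtostderr   # record the node set
    out/minikube-linux-amd64 stop -p ha-093284 -v=7 --alsologtostderr
    out/minikube-linux-amd64 start -p ha-093284 --wait=true -v=7 --alsologtostderr
    out/minikube-linux-amd64 node list -p ha-093284                          # the same four nodes should be listed again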

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-093284 node delete m03 -v=7 --alsologtostderr: (10.573299824s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.32s)
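Deleting a secondary control-plane node and reading the node list back follows directly from the commands above (this assumes the kubectl context already points at ha-093284, as it does in this run):

    out/minikube-linux-amd64 -p ha-093284 node delete m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
    kubectl get nodes   # ha-093284-m03 should no longer be listed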

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 stop -v=7 --alsologtostderr
E1202 11:48:42.952455   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-093284 stop -v=7 --alsologtostderr: (35.316422041s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr: exit status 7 (102.893059ms)

                                                
                                                
-- stdout --
	ha-093284
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-093284-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-093284-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 11:49:13.398323   95317 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:49:13.398454   95317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:49:13.398463   95317 out.go:358] Setting ErrFile to fd 2...
	I1202 11:49:13.398468   95317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:49:13.398642   95317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 11:49:13.398794   95317 out.go:352] Setting JSON to false
	I1202 11:49:13.398819   95317 mustload.go:65] Loading cluster: ha-093284
	I1202 11:49:13.398937   95317 notify.go:220] Checking for updates...
	I1202 11:49:13.399260   95317 config.go:182] Loaded profile config "ha-093284": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:49:13.399288   95317 status.go:174] checking status of ha-093284 ...
	I1202 11:49:13.399770   95317 cli_runner.go:164] Run: docker container inspect ha-093284 --format={{.State.Status}}
	I1202 11:49:13.418796   95317 status.go:371] ha-093284 host status = "Stopped" (err=<nil>)
	I1202 11:49:13.418832   95317 status.go:384] host is not running, skipping remaining checks
	I1202 11:49:13.418840   95317 status.go:176] ha-093284 status: &{Name:ha-093284 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 11:49:13.418874   95317 status.go:174] checking status of ha-093284-m02 ...
	I1202 11:49:13.419191   95317 cli_runner.go:164] Run: docker container inspect ha-093284-m02 --format={{.State.Status}}
	I1202 11:49:13.436601   95317 status.go:371] ha-093284-m02 host status = "Stopped" (err=<nil>)
	I1202 11:49:13.436641   95317 status.go:384] host is not running, skipping remaining checks
	I1202 11:49:13.436652   95317 status.go:176] ha-093284-m02 status: &{Name:ha-093284-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 11:49:13.436679   95317 status.go:174] checking status of ha-093284-m04 ...
	I1202 11:49:13.436939   95317 cli_runner.go:164] Run: docker container inspect ha-093284-m04 --format={{.State.Status}}
	I1202 11:49:13.453930   95317 status.go:371] ha-093284-m04 host status = "Stopped" (err=<nil>)
	I1202 11:49:13.453953   95317 status.go:384] host is not running, skipping remaining checks
	I1202 11:49:13.453959   95317 status.go:176] ha-093284-m04 status: &{Name:ha-093284-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.42s)
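StopCluster applies the same status-after-stop pattern to every remaining node at once. A short sketch (as before, exit status 7 is the observed result once all hosts report Stopped):

    out/minikube-linux-amd64 -p ha-093284 stop -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
    echo $?   # 7 in the run above; all three remaining nodes report host and kubelet Stopped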

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (37.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-093284 --control-plane -v=7 --alsologtostderr
E1202 11:51:46.079550   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-093284 --control-plane -v=7 --alsologtostderr: (36.902298192s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-093284 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
x
+
TestJSONOutput/start/Command (42.47s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-230482 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1202 11:52:13.782584   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-230482 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (42.46877438s)
--- PASS: TestJSONOutput/start/Command (42.47s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-230482 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-230482 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.75s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-230482 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-230482 --output=json --user=testUser: (5.75216016s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-665306 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-665306 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.765314ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d7744506-d3c6-4ddc-8e63-e0f87ee6a8a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-665306] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"be84b3d5-a2a6-48d8-8aab-c69d9a7f322f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20033"}}
	{"specversion":"1.0","id":"b9ea73a1-9a20-40bb-8c7f-05e2eecc224e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"75a4e6e3-1c1b-4b9c-b2f3-04e63a7c8626","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig"}}
	{"specversion":"1.0","id":"872b1082-58f5-4b21-93f5-35b5b2a96674","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube"}}
	{"specversion":"1.0","id":"6fd5fb9a-c106-43f4-887d-61b07d6fb6d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0bf585a3-c782-4f91-b9fa-ea26803beb0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a3d9c9da-d3c7-4984-854f-0c4957520cd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-665306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-665306
--- PASS: TestErrorJSONOutput (0.21s)
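TestErrorJSONOutput relies on --output=json turning every step and error into a CloudEvents-style JSON line on stdout, as captured above. A sketch of reproducing and filtering that output by hand; the jq filter is an assumption for illustration, not something the test itself uses:

    out/minikube-linux-amd64 start -p json-output-error-665306 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # prints: The driver 'fail' is not supported on linux/amd64   (minikube itself exits 56, as above)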

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (29.17s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-712327 --network=
E1202 11:53:15.248542   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-712327 --network=: (27.093721133s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-712327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-712327
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-712327: (2.063296604s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.17s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (23.02s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-232853 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-232853 --network=bridge: (21.090155918s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-232853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-232853
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-232853: (1.907792573s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.02s)

                                                
                                    
x
+
TestKicExistingNetwork (26.6s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1202 11:53:55.594189   13299 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1202 11:53:55.611167   13299 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1202 11:53:55.611251   13299 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1202 11:53:55.611307   13299 cli_runner.go:164] Run: docker network inspect existing-network
W1202 11:53:55.628603   13299 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1202 11:53:55.628630   13299 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1202 11:53:55.628645   13299 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1202 11:53:55.628805   13299 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1202 11:53:55.646213   13299 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-236a02a97ab3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:38:d0:1e:4e} reservation:<nil>}
I1202 11:53:55.646826   13299 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001549660}
I1202 11:53:55.646868   13299 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1202 11:53:55.646927   13299 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1202 11:53:55.705453   13299 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-473201 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-473201 --network=existing-network: (24.589659554s)
helpers_test.go:175: Cleaning up "existing-network-473201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-473201
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-473201: (1.870149464s)
I1202 11:54:22.182282   13299 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.60s)
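TestKicExistingNetwork shows the kic driver reusing a bridge network that already exists: the network is created with plain docker first, then passed to minikube via --network. Condensed from the log above (192.168.58.0/24 was simply the next free private subnet in this run; the MTU, masquerade, and icc options from the log are omitted here for brevity):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
    out/minikube-linux-amd64 start -p existing-network-473201 --network=existing-network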

                                                
                                    
x
+
TestKicCustomSubnet (26.52s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-719399 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-719399 --subnet=192.168.60.0/24: (24.407066705s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-719399 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-719399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-719399
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-719399: (2.090470462s)
--- PASS: TestKicCustomSubnet (26.52s)

                                                
                                    
x
+
TestKicStaticIP (26.17s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-421594 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-421594 --static-ip=192.168.200.200: (23.975268049s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-421594 ip
helpers_test.go:175: Cleaning up "static-ip-421594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-421594
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-421594: (2.071236981s)
--- PASS: TestKicStaticIP (26.17s)
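The static-IP case is a start with --static-ip followed by reading the address back:

    out/minikube-linux-amd64 start -p static-ip-421594 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-421594 ip   # should print 192.168.200.200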

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (53.44s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-133987 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-133987 --driver=docker  --container-runtime=crio: (23.642545485s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-149085 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-149085 --driver=docker  --container-runtime=crio: (24.608547322s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-133987
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-149085
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-149085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-149085
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-149085: (1.852523339s)
helpers_test.go:175: Cleaning up "first-133987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-133987
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-133987: (2.189404679s)
--- PASS: TestMinikubeProfile (53.44s)
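Note: TestMinikubeProfile drives two clusters side by side and flips the active profile between them. A condensed sketch of the same flow (profile names arbitrary):

	minikube start -p first-demo --driver=docker --container-runtime=crio
	minikube start -p second-demo --driver=docker --container-runtime=crio
	minikube profile first-demo          # select the active profile
	minikube profile list -ojson         # both profiles should appear in the listing
	minikube delete -p second-demo
	minikube delete -p first-demo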

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-033077 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-033077 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.140757755s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.14s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-033077 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)
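Note: the mount-start tests boot a Kubernetes-free profile with a host directory mounted at /minikube-host and then list it over SSH. A sketch with the same flags the harness uses (the port, uid/gid, and msize values are just the test's choices):

	minikube start -p mount-demo --memory=2048 --no-kubernetes --driver=docker --container-runtime=crio \
	  --mount --mount-port 46464 --mount-uid 0 --mount-gid 0 --mount-msize 6543
	minikube -p mount-demo ssh -- ls /minikube-host   # should list the mounted host directory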

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.26s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-045540 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-045540 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.25923556s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.26s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-045540 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-033077 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-033077 --alsologtostderr -v=5: (1.592576395s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-045540 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-045540
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-045540: (1.176385961s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.29s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-045540
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-045540: (6.293125229s)
--- PASS: TestMountStart/serial/RestartStopped (7.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-045540 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (72.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-926398 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1202 11:56:46.079075   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-926398 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m11.83555066s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (72.29s)
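Note: FreshStart2Nodes brings up a control plane and one worker in a single start invocation. The equivalent command outside the harness (profile name arbitrary):

	minikube start -p multinode-demo --nodes=2 --memory=2200 --wait=true --driver=docker --container-runtime=crio
	minikube -p multinode-demo status    # expect one Control Plane and one Worker, both Running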

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-926398 -- rollout status deployment/busybox: (1.618157933s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- exec busybox-7dff88458-fr6cd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- exec busybox-7dff88458-ph2dx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- exec busybox-7dff88458-fr6cd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- exec busybox-7dff88458-ph2dx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- exec busybox-7dff88458-fr6cd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- exec busybox-7dff88458-ph2dx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.03s)
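Note: DeployApp2Nodes applies a two-replica busybox deployment and resolves external and in-cluster names from a pod on each node. A reduced sketch against one pod (the manifest path is the test's own testdata file; any deployment that lands a pod per node would serve, and the jsonpath index is illustrative):

	minikube kubectl -p multinode-demo -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	minikube kubectl -p multinode-demo -- rollout status deployment/busybox
	POD=$(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')
	minikube kubectl -p multinode-demo -- exec "$POD" -- nslookup kubernetes.io
	minikube kubectl -p multinode-demo -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local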

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- exec busybox-7dff88458-fr6cd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- exec busybox-7dff88458-fr6cd -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- exec busybox-7dff88458-ph2dx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-926398 -- exec busybox-7dff88458-ph2dx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (27.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-926398 -v 3 --alsologtostderr
E1202 11:58:15.247957   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-926398 -v 3 --alsologtostderr: (27.199014342s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.81s)
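Note: AddNode grows the running cluster by one worker (m03 in the run above). Equivalent commands:

	minikube node add -p multinode-demo
	minikube -p multinode-demo status --alsologtostderr   # the new node should report Running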

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-926398 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp testdata/cp-test.txt multinode-926398:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp multinode-926398:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1754036167/001/cp-test_multinode-926398.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp multinode-926398:/home/docker/cp-test.txt multinode-926398-m02:/home/docker/cp-test_multinode-926398_multinode-926398-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m02 "sudo cat /home/docker/cp-test_multinode-926398_multinode-926398-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp multinode-926398:/home/docker/cp-test.txt multinode-926398-m03:/home/docker/cp-test_multinode-926398_multinode-926398-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m03 "sudo cat /home/docker/cp-test_multinode-926398_multinode-926398-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp testdata/cp-test.txt multinode-926398-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp multinode-926398-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1754036167/001/cp-test_multinode-926398-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp multinode-926398-m02:/home/docker/cp-test.txt multinode-926398:/home/docker/cp-test_multinode-926398-m02_multinode-926398.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398 "sudo cat /home/docker/cp-test_multinode-926398-m02_multinode-926398.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp multinode-926398-m02:/home/docker/cp-test.txt multinode-926398-m03:/home/docker/cp-test_multinode-926398-m02_multinode-926398-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m03 "sudo cat /home/docker/cp-test_multinode-926398-m02_multinode-926398-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp testdata/cp-test.txt multinode-926398-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp multinode-926398-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1754036167/001/cp-test_multinode-926398-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp multinode-926398-m03:/home/docker/cp-test.txt multinode-926398:/home/docker/cp-test_multinode-926398-m03_multinode-926398.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398 "sudo cat /home/docker/cp-test_multinode-926398-m03_multinode-926398.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 cp multinode-926398-m03:/home/docker/cp-test.txt multinode-926398-m02:/home/docker/cp-test_multinode-926398-m03_multinode-926398-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 ssh -n multinode-926398-m02 "sudo cat /home/docker/cp-test_multinode-926398-m03_multinode-926398-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.06s)
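Note: CopyFile round-trips a file between the host and every node with minikube cp and reads it back over ssh; nodes are addressed as <profile>, <profile>-m02, <profile>-m03. One hop of that pattern:

	minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
	minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
	# node-to-node copies use the same command with a node-qualified source and destination
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt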

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-926398 node stop m03: (1.17706026s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-926398 status: exit status 7 (458.19442ms)

                                                
                                                
-- stdout --
	multinode-926398
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-926398-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-926398-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-926398 status --alsologtostderr: exit status 7 (456.576795ms)

                                                
                                                
-- stdout --
	multinode-926398
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-926398-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-926398-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 11:58:29.841931  162631 out.go:345] Setting OutFile to fd 1 ...
	I1202 11:58:29.842075  162631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:58:29.842089  162631 out.go:358] Setting ErrFile to fd 2...
	I1202 11:58:29.842096  162631 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 11:58:29.842285  162631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 11:58:29.842455  162631 out.go:352] Setting JSON to false
	I1202 11:58:29.842480  162631 mustload.go:65] Loading cluster: multinode-926398
	I1202 11:58:29.842533  162631 notify.go:220] Checking for updates...
	I1202 11:58:29.843022  162631 config.go:182] Loaded profile config "multinode-926398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 11:58:29.843056  162631 status.go:174] checking status of multinode-926398 ...
	I1202 11:58:29.843589  162631 cli_runner.go:164] Run: docker container inspect multinode-926398 --format={{.State.Status}}
	I1202 11:58:29.862420  162631 status.go:371] multinode-926398 host status = "Running" (err=<nil>)
	I1202 11:58:29.862459  162631 host.go:66] Checking if "multinode-926398" exists ...
	I1202 11:58:29.862763  162631 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-926398
	I1202 11:58:29.879930  162631 host.go:66] Checking if "multinode-926398" exists ...
	I1202 11:58:29.880208  162631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 11:58:29.880297  162631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-926398
	I1202 11:58:29.897403  162631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/multinode-926398/id_rsa Username:docker}
	I1202 11:58:29.985306  162631 ssh_runner.go:195] Run: systemctl --version
	I1202 11:58:29.989313  162631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:58:30.000291  162631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 11:58:30.044399  162631 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-12-02 11:58:30.035783272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 11:58:30.044996  162631 kubeconfig.go:125] found "multinode-926398" server: "https://192.168.67.2:8443"
	I1202 11:58:30.045042  162631 api_server.go:166] Checking apiserver status ...
	I1202 11:58:30.045080  162631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 11:58:30.055592  162631 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1505/cgroup
	I1202 11:58:30.064451  162631 api_server.go:182] apiserver freezer: "6:freezer:/docker/2565af30b108ba9da646c0f84eca04866a513c9bda04c7d96ec781dec072df2e/crio/crio-2378b98f84a2a52e47b446f169cf328be7081017ef76b9769b13ca03ef920552"
	I1202 11:58:30.064504  162631 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2565af30b108ba9da646c0f84eca04866a513c9bda04c7d96ec781dec072df2e/crio/crio-2378b98f84a2a52e47b446f169cf328be7081017ef76b9769b13ca03ef920552/freezer.state
	I1202 11:58:30.072369  162631 api_server.go:204] freezer state: "THAWED"
	I1202 11:58:30.072394  162631 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1202 11:58:30.075924  162631 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1202 11:58:30.075949  162631 status.go:463] multinode-926398 apiserver status = Running (err=<nil>)
	I1202 11:58:30.075960  162631 status.go:176] multinode-926398 status: &{Name:multinode-926398 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 11:58:30.075983  162631 status.go:174] checking status of multinode-926398-m02 ...
	I1202 11:58:30.076243  162631 cli_runner.go:164] Run: docker container inspect multinode-926398-m02 --format={{.State.Status}}
	I1202 11:58:30.093621  162631 status.go:371] multinode-926398-m02 host status = "Running" (err=<nil>)
	I1202 11:58:30.093646  162631 host.go:66] Checking if "multinode-926398-m02" exists ...
	I1202 11:58:30.093943  162631 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-926398-m02
	I1202 11:58:30.111416  162631 host.go:66] Checking if "multinode-926398-m02" exists ...
	I1202 11:58:30.111657  162631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 11:58:30.111690  162631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-926398-m02
	I1202 11:58:30.128985  162631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/20033-6540/.minikube/machines/multinode-926398-m02/id_rsa Username:docker}
	I1202 11:58:30.221228  162631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 11:58:30.232548  162631 status.go:176] multinode-926398-m02 status: &{Name:multinode-926398-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1202 11:58:30.232598  162631 status.go:174] checking status of multinode-926398-m03 ...
	I1202 11:58:30.232842  162631 cli_runner.go:164] Run: docker container inspect multinode-926398-m03 --format={{.State.Status}}
	I1202 11:58:30.250148  162631 status.go:371] multinode-926398-m03 host status = "Stopped" (err=<nil>)
	I1202 11:58:30.250175  162631 status.go:384] host is not running, skipping remaining checks
	I1202 11:58:30.250183  162631 status.go:176] multinode-926398-m03 status: &{Name:multinode-926398-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-926398 node start m03 -v=7 --alsologtostderr: (8.278045614s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.93s)
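Note: StopNode and StartAfterStop toggle a single worker while the rest of the cluster keeps running; status exits 7 while any node is down. The same sequence by hand:

	minikube -p multinode-demo node stop m03
	minikube -p multinode-demo status        # exit status 7, m03 reported Stopped
	minikube -p multinode-demo node start m03
	minikube -p multinode-demo status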

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (78.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-926398
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-926398
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-926398: (24.676002778s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-926398 --wait=true -v=8 --alsologtostderr
E1202 11:59:38.313969   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-926398 --wait=true -v=8 --alsologtostderr: (53.481695076s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-926398
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.26s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-926398 node delete m03: (4.424874811s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.00s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-926398 stop: (23.553293756s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-926398 status: exit status 7 (89.138066ms)

                                                
                                                
-- stdout --
	multinode-926398
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-926398-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-926398 status --alsologtostderr: exit status 7 (86.461296ms)

                                                
                                                
-- stdout --
	multinode-926398
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-926398-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:00:26.129170  171944 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:00:26.129278  171944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:00:26.129286  171944 out.go:358] Setting ErrFile to fd 2...
	I1202 12:00:26.129289  171944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:00:26.129497  171944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 12:00:26.129671  171944 out.go:352] Setting JSON to false
	I1202 12:00:26.129699  171944 mustload.go:65] Loading cluster: multinode-926398
	I1202 12:00:26.129743  171944 notify.go:220] Checking for updates...
	I1202 12:00:26.130151  171944 config.go:182] Loaded profile config "multinode-926398": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:00:26.130181  171944 status.go:174] checking status of multinode-926398 ...
	I1202 12:00:26.130669  171944 cli_runner.go:164] Run: docker container inspect multinode-926398 --format={{.State.Status}}
	I1202 12:00:26.149335  171944 status.go:371] multinode-926398 host status = "Stopped" (err=<nil>)
	I1202 12:00:26.149363  171944 status.go:384] host is not running, skipping remaining checks
	I1202 12:00:26.149369  171944 status.go:176] multinode-926398 status: &{Name:multinode-926398 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 12:00:26.149408  171944 status.go:174] checking status of multinode-926398-m02 ...
	I1202 12:00:26.149672  171944 cli_runner.go:164] Run: docker container inspect multinode-926398-m02 --format={{.State.Status}}
	I1202 12:00:26.167581  171944 status.go:371] multinode-926398-m02 host status = "Stopped" (err=<nil>)
	I1202 12:00:26.167603  171944 status.go:384] host is not running, skipping remaining checks
	I1202 12:00:26.167610  171944 status.go:176] multinode-926398-m02 status: &{Name:multinode-926398-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.73s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (52.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-926398 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-926398 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (51.516900787s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-926398 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.07s)
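Note: the stop/restart sub-tests shut the whole profile down and start it again, expecting every node to rejoin and report Ready. Condensed:

	minikube stop -p multinode-demo
	minikube start -p multinode-demo --wait=true --driver=docker --container-runtime=crio
	kubectl get nodes      # all nodes should return to Ready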

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (25.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-926398
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-926398-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-926398-m02 --driver=docker  --container-runtime=crio: exit status 14 (67.425583ms)

                                                
                                                
-- stdout --
	* [multinode-926398-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-926398-m02' is duplicated with machine name 'multinode-926398-m02' in profile 'multinode-926398'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-926398-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-926398-m03 --driver=docker  --container-runtime=crio: (23.532312216s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-926398
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-926398: exit status 80 (266.441918ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-926398 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-926398-m03 already exists in multinode-926398-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-926398-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-926398-m03: (1.843062804s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.76s)

                                                
                                    
x
+
TestPreload (103.97s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-592793 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-592793 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m15.431484955s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-592793 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-592793 image pull gcr.io/k8s-minikube/busybox: (1.303584461s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-592793
E1202 12:03:09.146692   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-592793: (5.710327999s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-592793 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1202 12:03:15.248410   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-592793 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (18.992170083s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-592793 image list
helpers_test.go:175: Cleaning up "test-preload-592793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-592793
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-592793: (2.301678969s)
--- PASS: TestPreload (103.97s)
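Note: TestPreload starts with --preload=false on an older Kubernetes version, pulls an extra image, stops, and restarts with preloading back on to confirm the image survives. The same flow (profile name arbitrary):

	minikube start -p preload-demo --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=2200 --driver=docker --container-runtime=crio
	minikube -p preload-demo image list    # busybox should still be listed
	minikube delete -p preload-demo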

                                                
                                    
x
+
TestScheduledStopUnix (96.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-146717 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-146717 --memory=2048 --driver=docker  --container-runtime=crio: (19.948429947s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-146717 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-146717 -n scheduled-stop-146717
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-146717 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1202 12:03:52.272035   13299 retry.go:31] will retry after 92.945µs: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.273207   13299 retry.go:31] will retry after 98.955µs: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.274368   13299 retry.go:31] will retry after 211.575µs: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.275518   13299 retry.go:31] will retry after 402.937µs: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.276652   13299 retry.go:31] will retry after 624.974µs: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.277791   13299 retry.go:31] will retry after 555.247µs: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.278918   13299 retry.go:31] will retry after 1.488992ms: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.281133   13299 retry.go:31] will retry after 1.784751ms: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.283376   13299 retry.go:31] will retry after 2.767709ms: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.286627   13299 retry.go:31] will retry after 4.603946ms: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.291872   13299 retry.go:31] will retry after 5.357271ms: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.298166   13299 retry.go:31] will retry after 11.635339ms: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.310437   13299 retry.go:31] will retry after 7.208524ms: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.318687   13299 retry.go:31] will retry after 24.512212ms: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
I1202 12:03:52.343944   13299 retry.go:31] will retry after 32.561951ms: open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/scheduled-stop-146717/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-146717 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-146717 -n scheduled-stop-146717
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-146717
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-146717 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-146717
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-146717: exit status 7 (67.18774ms)

                                                
                                                
-- stdout --
	scheduled-stop-146717
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-146717 -n scheduled-stop-146717
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-146717 -n scheduled-stop-146717: exit status 7 (69.910347ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-146717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-146717
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-146717: (5.015323705s)
--- PASS: TestScheduledStopUnix (96.27s)
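Note: TestScheduledStopUnix arms a delayed stop, cancels it, then arms a short one and waits for the host to reach Stopped. The same flow by hand (profile name arbitrary):

	minikube stop -p sched-demo --schedule 5m
	minikube status -p sched-demo --format={{.TimeToStop}}
	minikube stop -p sched-demo --cancel-scheduled
	minikube stop -p sched-demo --schedule 15s
	minikube status -p sched-demo          # exit status 7 / Stopped once the schedule fires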

                                                
                                    
x
+
TestInsufficientStorage (9.99s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-090509 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-090509 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.636238272s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"50359f96-95a3-4a79-be28-f1e95acba3ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-090509] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7710655e-6a8b-4060-9722-c585fa0176e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20033"}}
	{"specversion":"1.0","id":"a345b538-bd1e-4bb7-9674-1713dc827d5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2862754b-b222-461c-bc89-8957d4232b3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig"}}
	{"specversion":"1.0","id":"0b49fbc2-7e4e-426d-98b8-58016e322bd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube"}}
	{"specversion":"1.0","id":"270f6a05-f7e1-44f2-b726-f9ad1ec63e46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cb53309c-586f-4fb3-b767-28573b32a75b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"27523bc8-5a5b-435d-95ee-71d67a1b79fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3a82491b-b4ce-4ad6-b91c-992b1e5cc443","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"303d3f39-c1b2-4488-a861-4ab04314e8c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"14ddfba5-4d5f-4aee-baef-aeb9950dd71b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"87da01c3-f620-498e-aa04-f1700ee746d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-090509\" primary control-plane node in \"insufficient-storage-090509\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc1dfce7-019c-4518-ae3c-e606e3da4078","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730888964-19917 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc208189-8a7c-4e5b-b1da-3746fa79023b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f33052a3-a1d7-40a4-8d59-eaefe66dc3b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-090509 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-090509 --output=json --layout=cluster: exit status 7 (261.160453ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-090509","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-090509","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:05:16.093699  194322 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-090509" does not appear in /home/jenkins/minikube-integration/20033-6540/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-090509 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-090509 --output=json --layout=cluster: exit status 7 (262.556995ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-090509","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-090509","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1202 12:05:16.357259  194421 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-090509" does not appear in /home/jenkins/minikube-integration/20033-6540/kubeconfig
	E1202 12:05:16.367014  194421 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/insufficient-storage-090509/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-090509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-090509
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-090509: (1.830519436s)
--- PASS: TestInsufficientStorage (9.99s)
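Note: TestInsufficientStorage overrides the storage capacity minikube believes is available (the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the JSON above) so that start exits 26, then inspects the structured status. The status query itself is generally useful; a hedged example (jq usage is illustrative):

	minikube status -p <profile> --output=json --layout=cluster | jq '.StatusName'
	# in this run it prints "InsufficientStorage" (StatusCode 507) with kubelet and apiserver Stopped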

                                                
                                    
x
+
TestRunningBinaryUpgrade (57.39s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1070447936 start -p running-upgrade-938058 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1070447936 start -p running-upgrade-938058 --memory=2200 --vm-driver=docker  --container-runtime=crio: (30.764328714s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-938058 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-938058 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.657589003s)
helpers_test.go:175: Cleaning up "running-upgrade-938058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-938058
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-938058: (2.3812264s)
--- PASS: TestRunningBinaryUpgrade (57.39s)
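
The upgrade path exercised here is: start a cluster with a legacy v1.26.0 binary, then re-run `start` on the same profile with the binary under test while the cluster is still running. A rough Go sketch of that sequence using os/exec; the binary paths and profile name are placeholders, not the exact temp paths from this run:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one minikube invocation and reports its combined output.
func run(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	fmt.Printf("$ %s %v -> err=%v\n%s", bin, args, err, out)
	return err
}

func main() {
	profile := "running-upgrade-demo"       // illustrative profile name
	oldBinary := "/tmp/minikube-v1.26.0"    // placeholder for the legacy binary
	newBinary := "out/minikube-linux-amd64" // binary under test

	_ = run(oldBinary, "start", "-p", profile, "--memory=2200",
		"--vm-driver=docker", "--container-runtime=crio")
	_ = run(newBinary, "start", "-p", profile, "--memory=2200",
		"--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio")
	_ = run(newBinary, "delete", "-p", profile)
}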

                                                
                                    
x
+
TestKubernetesUpgrade (346.87s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-793878 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-793878 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.18405263s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-793878
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-793878: (1.252672787s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-793878 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-793878 status --format={{.Host}}: exit status 7 (85.753897ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-793878 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-793878 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.143607734s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-793878 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-793878 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-793878 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (82.805831ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-793878] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-793878
	    minikube start -p kubernetes-upgrade-793878 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7938782 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-793878 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-793878 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-793878 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.843961535s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-793878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-793878
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-793878: (2.21490988s)
--- PASS: TestKubernetesUpgrade (346.87s)
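
The downgrade step is expected to fail fast: minikube refuses to move the existing v1.31.2 cluster back to v1.20.0 and exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED). A hedged sketch of how a caller could assert on that exit code; the command mirrors the one shown above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Attempt the unsupported downgrade; 106 is the exit status observed above.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "kubernetes-upgrade-793878", "--memory=2200",
		"--kubernetes-version=v1.20.0", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
		fmt.Println("downgrade refused as expected (K8S_DOWNGRADE_UNSUPPORTED)")
		return
	}
	fmt.Println("unexpected result:", err)
}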

                                                
                                    
x
+
TestMissingContainerUpgrade (130.55s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3359618095 start -p missing-upgrade-654415 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3359618095 start -p missing-upgrade-654415 --memory=2200 --driver=docker  --container-runtime=crio: (1m0.058672521s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-654415
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-654415: (11.257369839s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-654415
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-654415 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-654415 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (56.749881323s)
helpers_test.go:175: Cleaning up "missing-upgrade-654415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-654415
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-654415: (1.962680793s)
--- PASS: TestMissingContainerUpgrade (130.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-671738 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-671738 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (85.84858ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-671738] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
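
The exit status 14 above comes from flag validation: `--kubernetes-version` cannot be combined with `--no-kubernetes`. A minimal sketch of that rule in Go, purely illustrative and not minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
)

// validateStartFlags mirrors the rule behind the MK_USAGE error above:
// a pinned Kubernetes version makes no sense on a --no-kubernetes cluster.
// Illustrative only; not minikube's implementation.
func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
	}
	return nil
}

func main() {
	fmt.Println(validateStartFlags(true, "1.20")) // rejected, as in the run above (exit 14)
	fmt.Println(validateStartFlags(true, ""))     // allowed
}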

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (38.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-671738 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-671738 --driver=docker  --container-runtime=crio: (38.162982938s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-671738 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-671738 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-671738 --no-kubernetes --driver=docker  --container-runtime=crio: (16.061663671s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-671738 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-671738 status -o json: exit status 2 (313.725675ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-671738","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-671738
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-671738: (1.999758966s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.38s)
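
The `status -o json` line above is the expected shape for a `--no-kubernetes` profile: the host container runs while kubelet and apiserver stay stopped, so the command exits 2. A small Go decoding sketch, with a struct matching only the keys shown:

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus matches only the keys shown in the status -o json line above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := []byte(`{"Name":"NoKubernetes-671738","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
	var s profileStatus
	if err := json.Unmarshal(raw, &s); err != nil {
		panic(err)
	}
	// A --no-kubernetes profile keeps the host running with kubelet stopped.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
}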

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-671738 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-671738 --no-kubernetes --driver=docker  --container-runtime=crio: (5.979554905s)
--- PASS: TestNoKubernetes/serial/Start (5.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-671738 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-671738 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.802228ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.70687685s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.536870785s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-671738
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-671738: (2.769416948s)
--- PASS: TestNoKubernetes/serial/Stop (2.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-671738 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-671738 --driver=docker  --container-runtime=crio: (6.879033557s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-671738 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-671738 "sudo systemctl is-active --quiet service kubelet": exit status 1 (292.877541ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (66.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.148696622 start -p stopped-upgrade-509524 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.148696622 start -p stopped-upgrade-509524 --memory=2200 --vm-driver=docker  --container-runtime=crio: (35.152805579s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.148696622 -p stopped-upgrade-509524 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.148696622 -p stopped-upgrade-509524 stop: (7.660506685s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-509524 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-509524 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.367134643s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (66.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-509524
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

                                                
                                    
x
+
TestPause/serial/Start (45.75s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-637603 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1202 12:08:15.247902   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-637603 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (45.74582032s)
--- PASS: TestPause/serial/Start (45.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-431516 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-431516 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (152.975611ms)

                                                
                                                
-- stdout --
	* [false-431516] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20033
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 12:08:29.533047  245358 out.go:345] Setting OutFile to fd 1 ...
	I1202 12:08:29.533382  245358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:08:29.533398  245358 out.go:358] Setting ErrFile to fd 2...
	I1202 12:08:29.533411  245358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1202 12:08:29.533717  245358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6540/.minikube/bin
	I1202 12:08:29.534479  245358 out.go:352] Setting JSON to false
	I1202 12:08:29.535942  245358 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3061,"bootTime":1733138249,"procs":342,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 12:08:29.536027  245358 start.go:139] virtualization: kvm guest
	I1202 12:08:29.539026  245358 out.go:177] * [false-431516] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1202 12:08:29.540545  245358 out.go:177]   - MINIKUBE_LOCATION=20033
	I1202 12:08:29.540596  245358 notify.go:220] Checking for updates...
	I1202 12:08:29.543585  245358 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 12:08:29.544960  245358 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20033-6540/kubeconfig
	I1202 12:08:29.546338  245358 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6540/.minikube
	I1202 12:08:29.547538  245358 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 12:08:29.548925  245358 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 12:08:29.550908  245358 config.go:182] Loaded profile config "cert-expiration-950456": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:08:29.551063  245358 config.go:182] Loaded profile config "kubernetes-upgrade-793878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:08:29.551194  245358 config.go:182] Loaded profile config "pause-637603": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1202 12:08:29.551308  245358 driver.go:394] Setting default libvirt URI to qemu:///system
	I1202 12:08:29.573636  245358 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1202 12:08:29.573757  245358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1202 12:08:29.619018  245358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:74 SystemTime:2024-12-02 12:08:29.609674692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1202 12:08:29.619130  245358 docker.go:318] overlay module found
	I1202 12:08:29.622117  245358 out.go:177] * Using the docker driver based on user configuration
	I1202 12:08:29.623610  245358 start.go:297] selected driver: docker
	I1202 12:08:29.623630  245358 start.go:901] validating driver "docker" against <nil>
	I1202 12:08:29.623642  245358 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 12:08:29.626219  245358 out.go:201] 
	W1202 12:08:29.627555  245358 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1202 12:08:29.628895  245358 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-431516 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-431516" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Dec 2024 12:06:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-950456
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Dec 2024 12:07:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-793878
contexts:
- context:
    cluster: cert-expiration-950456
    extensions:
    - extension:
        last-update: Mon, 02 Dec 2024 12:06:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-950456
  name: cert-expiration-950456
- context:
    cluster: kubernetes-upgrade-793878
    user: kubernetes-upgrade-793878
  name: kubernetes-upgrade-793878
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-950456
  user:
    client-certificate: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/cert-expiration-950456/client.crt
    client-key: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/cert-expiration-950456/client.key
- name: kubernetes-upgrade-793878
  user:
    client-certificate: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/kubernetes-upgrade-793878/client.crt
    client-key: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/kubernetes-upgrade-793878/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-431516

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-431516"

                                                
                                                
----------------------- debugLogs end: false-431516 [took: 3.014410127s] --------------------------------
helpers_test.go:175: Cleaning up "false-431516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-431516
--- PASS: TestNetworkPlugins/group/false (3.33s)
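
The whole group passes in 3.33s because the start command is rejected during validation: the crio runtime requires a CNI, so `--cni=false` exits with status 14 before any node exists, which is also why every debugLogs probe above reports a missing profile or context. A hedged sketch of that rule, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
)

// checkCNI mirrors the MK_USAGE rule seen above: the "crio" container runtime
// needs a CNI, so --cni=false is rejected up front. Illustrative only.
func checkCNI(containerRuntime, cni string) error {
	if containerRuntime == "crio" && cni == "false" {
		return errors.New(`the "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	fmt.Println(checkCNI("crio", "false")) // rejected; exit status 14 in the run above
	fmt.Println(checkCNI("crio", "auto"))  // allowed
}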

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (122.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-910761 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-910761 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m2.888036418s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (122.89s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (39.73s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-637603 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-637603 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.714055603s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.73s)

                                                
                                    
x
+
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-637603 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-637603 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-637603 --output=json --layout=cluster: exit status 2 (299.728736ms)

                                                
                                                
-- stdout --
	{"Name":"pause-637603","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-637603","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
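
The `--layout=cluster` status uses HTTP-like codes; this report shows 200 (OK), 405 (Stopped), 418 (Paused), 500 (Error) and 507 (InsufficientStorage). A small Go lookup collecting just the pairs observed here (illustrative, not an exhaustive list of minikube's codes):

package main

import "fmt"

// statusNames maps the StatusCode values seen in this report's
// --layout=cluster output to their StatusName strings. Illustrative only.
var statusNames = map[int]string{
	200: "OK",
	405: "Stopped",
	418: "Paused",
	500: "Error",
	507: "InsufficientStorage",
}

func main() {
	// pause-637603 above reports 418 for the apiserver and 405 for the kubelet.
	for _, code := range []int{418, 405, 200} {
		fmt.Println(code, statusNames[code])
	}
}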

                                                
                                    
x
+
TestPause/serial/Unpause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-637603 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.71s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-637603 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.71s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.63s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-637603 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-637603 --alsologtostderr -v=5: (2.625321495s)
--- PASS: TestPause/serial/DeletePaused (2.63s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (14.95s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.874171732s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-637603
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-637603: exit status 1 (26.841933ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-637603: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (58.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-706387 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-706387 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (58.169384018s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (42.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-983523 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-983523 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (42.971763032s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.97s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-910761 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ac914429-74ed-4f56-a551-867694fb3278] Pending
helpers_test.go:344: "busybox" [ac914429-74ed-4f56-a551-867694fb3278] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ac914429-74ed-4f56-a551-867694fb3278] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004736218s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-910761 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-983523 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [93e2939b-754b-4641-be5e-08fe9fe84fa2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [93e2939b-754b-4641-be5e-08fe9fe84fa2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005482301s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-983523 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-910761 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-910761 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)
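
Note: this step points the metrics-server addon at registry fake.domain, which looks deliberately unresolvable, so the assertion is only that the deployment object exists in kube-system, not that metrics-server becomes Ready. The same check by hand:

    out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-910761 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-910761 describe deploy/metrics-server -n kube-system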

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-706387 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [750637f9-2c28-41b7-8f25-776c86372999] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [750637f9-2c28-41b7-8f25-776c86372999] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.0037793s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-706387 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-910761 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-910761 --alsologtostderr -v=3: (11.973042225s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-983523 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-983523 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-983523 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-983523 --alsologtostderr -v=3: (11.850157266s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-706387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-706387 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-706387 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-706387 --alsologtostderr -v=3: (13.665240021s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-910761 -n old-k8s-version-910761
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-910761 -n old-k8s-version-910761: exit status 7 (67.752399ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-910761 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
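
Note: "status error: exit status 7 (may be ok)" is the test tolerating a non-zero exit from minikube status on a profile that was just stopped; the dashboard addon is then enabled against the stopped profile. A hedged shell sketch of the same sequence:

    # status exits non-zero (7 here) while the host is stopped, so ignore the exit code
    out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-910761 || true
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-910761 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4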

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (124.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-910761 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-910761 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m3.840796713s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-910761 -n old-k8s-version-910761
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (124.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-983523 -n embed-certs-983523
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-983523 -n embed-certs-983523: exit status 7 (66.950833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-983523 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (267.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-983523 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-983523 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m27.233180665s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-983523 -n embed-certs-983523
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-706387 -n no-preload-706387
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-706387 -n no-preload-706387: exit status 7 (88.301678ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-706387 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (263.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-706387 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1202 12:11:46.079211   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/functional-181307/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-706387 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m23.405199719s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-706387 -n no-preload-706387
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-754500 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-754500 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (44.173713821s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vzq4r" [adc0d712-a98c-478e-817b-3ce746d87a4c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004466206s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-754500 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [407c255a-c190-4e08-ba7a-a085b3b66a46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [407c255a-c190-4e08-ba7a-a085b3b66a46] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004629878s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-754500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vzq4r" [adc0d712-a98c-478e-817b-3ce746d87a4c] Running
E1202 12:13:15.248412   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003428683s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-910761 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-754500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-754500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-910761 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
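
Note: VerifyKubernetesImages lists the images present in the profile as JSON and reports anything outside the expected minikube set (the kindnetd and busybox entries above). To eyeball the same list, something like the following works; the repoTags field name is an assumption about the JSON shape, the test itself parses the output in Go:

    out/minikube-linux-amd64 -p old-k8s-version-910761 image list --format=json \
      | jq -r '.[].repoTags[]'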

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-910761 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-910761 -n old-k8s-version-910761
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-910761 -n old-k8s-version-910761: exit status 2 (296.060348ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-910761 -n old-k8s-version-910761
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-910761 -n old-k8s-version-910761: exit status 2 (312.308022ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-910761 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-910761 -n old-k8s-version-910761
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-910761 -n old-k8s-version-910761
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.70s)
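
Note: Pause expects exit status 2 from minikube status while components are paused, with {{.APIServer}} reporting Paused and {{.Kubelet}} reporting Stopped, before unpause brings both back. The same loop by hand:

    out/minikube-linux-amd64 pause -p old-k8s-version-910761 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-910761 || true  # expect Paused
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-910761 || true    # expect Stopped
    out/minikube-linux-amd64 unpause -p old-k8s-version-910761 --alsologtostderr -v=1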

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-754500 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-754500 --alsologtostderr -v=3: (11.900708363s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-492605 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-492605 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (28.511758338s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754500 -n default-k8s-diff-port-754500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754500 -n default-k8s-diff-port-754500: exit status 7 (76.125661ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-754500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-754500 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-754500 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m57.497735122s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-754500 -n default-k8s-diff-port-754500
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.81s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-492605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-492605 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.1387088s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-492605 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-492605 --alsologtostderr -v=3: (2.104185402s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-492605 -n newest-cni-492605
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-492605 -n newest-cni-492605: exit status 7 (69.252924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-492605 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-492605 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-492605 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (13.240474603s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-492605 -n newest-cni-492605
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-492605 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-492605 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-492605 --alsologtostderr -v=1: (1.028833761s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-492605 -n newest-cni-492605
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-492605 -n newest-cni-492605: exit status 2 (302.442459ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-492605 -n newest-cni-492605
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-492605 -n newest-cni-492605: exit status 2 (289.233581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-492605 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-492605 -n newest-cni-492605
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-492605 -n newest-cni-492605
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.88s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (42.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.72305787s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.72s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-431516 "pgrep -a kubelet"
I1202 12:14:58.764811   13299 config.go:182] Loaded profile config "auto-431516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-431516 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zshbl" [88507560-29dd-45d6-a836-440fa3f4a656] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zshbl" [88507560-29dd-45d6-a836-440fa3f4a656] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004172284s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-431516 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
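
Note: DNS, Localhost and HairPin all exec into the netcat deployment created by NetCatPod; the HairPin probe targets the pod's own service name, i.e. hairpin traffic. The three probes, exactly as run above:

    kubectl --context auto-431516 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"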

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (45.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.97726029s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-f6xp5" [2b2962b6-5829-400b-b74e-b3fbba72b083] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003705101s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-f6xp5" [2b2962b6-5829-400b-b74e-b3fbba72b083] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004429525s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-983523 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2fljs" [8da7c1e3-c62c-4b1e-8767-42ba908097bb] Running
E1202 12:15:42.911719   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:15:42.918104   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:15:42.929544   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:15:42.950940   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:15:42.992367   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:15:43.073930   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:15:43.235637   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
E1202 12:15:43.557460   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007228069s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-983523 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-983523 --alsologtostderr -v=1
E1202 12:15:44.199297   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-983523 -n embed-certs-983523
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-983523 -n embed-certs-983523: exit status 2 (315.268237ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-983523 -n embed-certs-983523
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-983523 -n embed-certs-983523: exit status 2 (318.212713ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-983523 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-983523 -n embed-certs-983523
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-983523 -n embed-certs-983523
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2fljs" [8da7c1e3-c62c-4b1e-8767-42ba908097bb] Running
E1202 12:15:45.481065   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00384144s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-706387 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (53.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (53.095964387s)
--- PASS: TestNetworkPlugins/group/calico/Start (53.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-706387 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-706387 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-706387 -n no-preload-706387
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-706387 -n no-preload-706387: exit status 2 (314.095802ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-706387 -n no-preload-706387
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-706387 -n no-preload-706387: exit status 2 (309.47904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-706387 --alsologtostderr -v=1
E1202 12:15:53.164470   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-706387 -n no-preload-706387
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-706387 -n no-preload-706387
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (43.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1202 12:16:03.406676   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (43.575231709s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (43.58s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-w654p" [0fe2e01b-c302-46ec-b677-081f72a5a542] Running
E1202 12:16:18.316064   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004285268s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
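
Note: ControllerPod only waits for the CNI daemon pod (label app=kindnet) in kube-system to be Running; an equivalent one-liner, assuming kubectl wait is an acceptable stand-in for the test's own poll loop:

    kubectl --context kindnet-431516 -n kube-system wait --for=condition=ready pod \
      -l app=kindnet --timeout=10m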

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-431516 "pgrep -a kubelet"
I1202 12:16:18.828223   13299 config.go:182] Loaded profile config "kindnet-431516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-431516 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2tvs9" [1a13e2f9-573b-419c-8d6a-43cf425efa1a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2tvs9" [1a13e2f9-573b-419c-8d6a-43cf425efa1a] Running
E1202 12:16:23.888416   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004055508s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-431516 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-431516 "pgrep -a kubelet"
I1202 12:16:41.690395   13299 config.go:182] Loaded profile config "custom-flannel-431516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-431516 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sjj27" [d5c12873-e886-40ae-a4d3-6c50a85396b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sjj27" [d5c12873-e886-40ae-a4d3-6c50a85396b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004345455s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-c949v" [8568817f-8b95-4908-b825-e781d9026083] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004848269s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-431516 "pgrep -a kubelet"
I1202 12:16:49.309818   13299 config.go:182] Loaded profile config "calico-431516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-431516 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sxm5f" [3bb00dcc-2c48-4567-a7e3-9dc8b009de68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sxm5f" [3bb00dcc-2c48-4567-a7e3-9dc8b009de68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004398539s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (38.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (38.80456516s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-431516 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-431516 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (52.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (52.015520397s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (61.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-431516 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m1.117727111s)
--- PASS: TestNetworkPlugins/group/bridge/Start (61.12s)
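
The Start runs for enable-default-cni, flannel and bridge differ only in how the CNI is selected: --enable-default-cni=true, --cni=flannel and --cni=bridge respectively. Outside the test harness an equivalent cluster can be brought up with the same flags minus the test-only ones; a sketch based on the command line logged above:

	minikube start -p bridge-431516 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio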

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-431516 "pgrep -a kubelet"
I1202 12:17:29.028071   13299 config.go:182] Loaded profile config "enable-default-cni-431516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-431516 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2wsmv" [b129a77a-ebe2-4092-9dd7-0e27b0d2257f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2wsmv" [b129a77a-ebe2-4092-9dd7-0e27b0d2257f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004325435s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-431516 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rmkzg" [fec911a5-b2d1-405f-bdee-20fc4796c908] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003877581s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
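
The ControllerPod check waits for the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace) to report Ready. A manual equivalent, assuming the flannel-431516 context is available:

	kubectl --context flannel-431516 -n kube-flannel get pods -l app=flannel
	kubectl --context flannel-431516 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=600s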

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-431516 "pgrep -a kubelet"
I1202 12:18:10.903027   13299 config.go:182] Loaded profile config "flannel-431516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-431516 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4rrz7" [5f453dd6-a86b-4e4b-8e02-bb7eee225c1e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4rrz7" [5f453dd6-a86b-4e4b-8e02-bb7eee225c1e] Running
E1202 12:18:15.248331   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/addons-522394/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003533032s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-431516 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-431516 "pgrep -a kubelet"
I1202 12:18:22.799343   13299 config.go:182] Loaded profile config "bridge-431516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-431516 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-872qg" [86c26c64-6430-4232-aca6-54a9e07b15e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1202 12:18:26.771872   13299 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/old-k8s-version-910761/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-872qg" [86c26c64-6430-4232-aca6-54a9e07b15e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004235978s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k4fff" [63b863e5-0d95-4b31-bd49-c1a4a817e3f7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004696088s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-431516 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-431516 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k4fff" [63b863e5-0d95-4b31-bd49-c1a4a817e3f7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00443202s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-754500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-754500 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-754500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754500 -n default-k8s-diff-port-754500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754500 -n default-k8s-diff-port-754500: exit status 2 (310.451468ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-754500 -n default-k8s-diff-port-754500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-754500 -n default-k8s-diff-port-754500: exit status 2 (311.284801ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-754500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-754500 -n default-k8s-diff-port-754500
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-754500 -n default-k8s-diff-port-754500
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.77s)
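
The Pause sequence drives minikube pause/unpause and checks component status in between; exit status 2 from "minikube status" is expected while components are paused or stopped, as the "(may be ok)" notes indicate. A condensed manual version of the same sequence (the post-unpause output is not shown in the log, so the final expectation is an assumption):

	minikube pause -p default-k8s-diff-port-754500
	minikube status -p default-k8s-diff-port-754500 --format='{{.APIServer}}'   # prints Paused, exits 2
	minikube status -p default-k8s-diff-port-754500 --format='{{.Kubelet}}'     # prints Stopped, exits 2
	minikube unpause -p default-k8s-diff-port-754500
	minikube status -p default-k8s-diff-port-754500 --format='{{.APIServer}}'   # expected to print Running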

                                                
                                    

Test skip (26/330)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.26s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-522394 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.26s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-062756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-062756
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-431516 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-431516" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Dec 2024 12:06:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-950456
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Dec 2024 12:07:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-793878
contexts:
- context:
    cluster: cert-expiration-950456
    extensions:
    - extension:
        last-update: Mon, 02 Dec 2024 12:06:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-950456
  name: cert-expiration-950456
- context:
    cluster: kubernetes-upgrade-793878
    user: kubernetes-upgrade-793878
  name: kubernetes-upgrade-793878
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-950456
  user:
    client-certificate: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/cert-expiration-950456/client.crt
    client-key: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/cert-expiration-950456/client.key
- name: kubernetes-upgrade-793878
  user:
    client-certificate: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/kubernetes-upgrade-793878/client.crt
    client-key: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/kubernetes-upgrade-793878/client.key
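
The "context was not found" and "does not exist" errors throughout this debugLogs dump follow directly from the kubeconfig above: it defines only the cert-expiration-950456 and kubernetes-upgrade-793878 contexts, and the kubenet-431516 profile was never started because the test is skipped before "minikube start" runs. A quick way to confirm, assuming the same kubeconfig is active:

	kubectl config get-contexts                    # lists only cert-expiration-950456 and kubernetes-upgrade-793878
	kubectl --context kubenet-431516 get pods      # fails: context was not found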

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-431516

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-431516"

                                                
                                                
----------------------- debugLogs end: kubenet-431516 [took: 3.146810009s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-431516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-431516
--- SKIP: TestNetworkPlugins/group/kubenet (3.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-431516 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-431516" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Dec 2024 12:06:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-950456
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20033-6540/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Dec 2024 12:07:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-793878
contexts:
- context:
    cluster: cert-expiration-950456
    extensions:
    - extension:
        last-update: Mon, 02 Dec 2024 12:06:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-950456
  name: cert-expiration-950456
- context:
    cluster: kubernetes-upgrade-793878
    user: kubernetes-upgrade-793878
  name: kubernetes-upgrade-793878
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-950456
  user:
    client-certificate: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/cert-expiration-950456/client.crt
    client-key: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/cert-expiration-950456/client.key
- name: kubernetes-upgrade-793878
  user:
    client-certificate: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/kubernetes-upgrade-793878/client.crt
    client-key: /home/jenkins/minikube-integration/20033-6540/.minikube/profiles/kubernetes-upgrade-793878/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-431516

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-431516" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-431516"

                                                
                                                
----------------------- debugLogs end: cilium-431516 [took: 3.275561432s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-431516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-431516
--- SKIP: TestNetworkPlugins/group/cilium (3.44s)
