Test Report: Docker_Linux_crio 19468

91a16964608358fea9174134e48bcab54b5c9be6:2024-08-19:35860

Failed tests (3/328)

Order  Failed test                                   Duration (s)
34     TestAddons/parallel/Ingress                    149.97
36     TestAddons/parallel/MetricsServer              296.97
174    TestMultiControlPlane/serial/RestartCluster    124.61
TestAddons/parallel/Ingress (149.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-142951 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-142951 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-142951 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [beea72ae-40cd-4972-9ea0-60896c86895d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [beea72ae-40cd-4972-9ea0-60896c86895d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.002824442s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-142951 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.6918044s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-142951 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-142951 addons disable ingress --alsologtostderr -v=1: (7.573410974s)
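Exit status 28 from the remote command is curl's "operation timed out" code (CURLE_OPERATION_TIMEDOUT), which suggests the probe above never received a response from the ingress controller rather than being refused outright. A minimal way to re-run the same probe by hand, assuming the addons-142951 profile is still up (the --max-time 30 and -v flags are debugging additions here, not part of the test):

	out/minikube-linux-amd64 -p addons-142951 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"

With -v, curl shows whether the TCP connection to the in-node listener is established before the timeout, which helps separate a missing ingress service from a slow or unready backend.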
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-142951
helpers_test.go:235: (dbg) docker inspect addons-142951:

-- stdout --
	[
	    {
	        "Id": "010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383",
	        "Created": "2024-08-19T17:57:11.410180908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33038,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T17:57:11.531105098Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:197224e1b90979b98de246567852a03b60e3aa31dcd0de02a456282118daeb84",
	        "ResolvConfPath": "/var/lib/docker/containers/010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383/hostname",
	        "HostsPath": "/var/lib/docker/containers/010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383/hosts",
	        "LogPath": "/var/lib/docker/containers/010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383/010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383-json.log",
	        "Name": "/addons-142951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-142951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-142951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bdabc06652087fcf79fb8e7008c3c883cfb0adc84e9a9f46231b4a492ec7f0b3-init/diff:/var/lib/docker/overlay2/0c2c9fdec01bef3a098fb8513a31b324e686eebb183f0aaad2be170703b9d191/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bdabc06652087fcf79fb8e7008c3c883cfb0adc84e9a9f46231b4a492ec7f0b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bdabc06652087fcf79fb8e7008c3c883cfb0adc84e9a9f46231b4a492ec7f0b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bdabc06652087fcf79fb8e7008c3c883cfb0adc84e9a9f46231b4a492ec7f0b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-142951",
	                "Source": "/var/lib/docker/volumes/addons-142951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-142951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-142951",
	                "name.minikube.sigs.k8s.io": "addons-142951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce2a820d1fe646445374e09740096c8a15f3cd8ce78c5388c2cd41d7746ff653",
	            "SandboxKey": "/var/run/docker/netns/ce2a820d1fe6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-142951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "26871bf810f1f705018de8bb3fd749522c8877a8a4a89af41f8045bb058152ac",
	                    "EndpointID": "535bad70c7ea5053a6056c83f3e2e7bb077f3683164fd3bf0359ff7c672ae775",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-142951",
	                        "010445039d67"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
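The inspect output above shows the node container itself is healthy: State.Status is "running", the SSH port is published on 127.0.0.1:32768, and the container holds 192.168.49.2 on the addons-142951 network, consistent with the node being up and the failure lying further along the ingress path. A short sketch for pulling just those fields with docker's Go-template formatter (the format string is an assumption for illustration, not something the test suite runs):

	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-142951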
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-142951 -n addons-142951
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-142951 logs -n 25: (1.024135311s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-314754 | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC |                     |
	|         | download-docker-314754                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-314754                                                                   | download-docker-314754 | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-755146   | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC |                     |
	|         | binary-mirror-755146                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44393                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-755146                                                                     | binary-mirror-755146   | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	| addons  | disable dashboard -p                                                                        | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC |                     |
	|         | addons-142951                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC |                     |
	|         | addons-142951                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-142951 --wait=true                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC | 19 Aug 24 17:59 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | addons-142951                                                                               |                        |         |         |                     |                     |
	| ip      | addons-142951 ip                                                                            | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-142951 ssh cat                                                                       | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | /opt/local-path-provisioner/pvc-c78e1662-15f1-40c8-8ca4-6b6d5b18666a_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | -p addons-142951                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | addons-142951                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | -p addons-142951                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-142951 ssh curl -s                                                                   | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-142951 addons                                                                        | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-142951 addons                                                                        | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-142951 ip                                                                            | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:02 UTC | 19 Aug 24 18:02 UTC |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:02 UTC | 19 Aug 24 18:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:02 UTC | 19 Aug 24 18:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:56:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:56:47.724282   32277 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:56:47.724500   32277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:56:47.724507   32277 out.go:358] Setting ErrFile to fd 2...
	I0819 17:56:47.724512   32277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:56:47.724666   32277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 17:56:47.725274   32277 out.go:352] Setting JSON to false
	I0819 17:56:47.726088   32277 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5958,"bootTime":1724084250,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:56:47.726138   32277 start.go:139] virtualization: kvm guest
	I0819 17:56:47.728120   32277 out.go:177] * [addons-142951] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:56:47.729252   32277 notify.go:220] Checking for updates...
	I0819 17:56:47.729261   32277 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 17:56:47.730359   32277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:56:47.731562   32277 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 17:56:47.732699   32277 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	I0819 17:56:47.733695   32277 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:56:47.734711   32277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:56:47.735820   32277 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:56:47.755762   32277 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:56:47.755860   32277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:56:47.801221   32277 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 17:56:47.792853029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 17:56:47.801319   32277 docker.go:307] overlay module found
	I0819 17:56:47.803138   32277 out.go:177] * Using the docker driver based on user configuration
	I0819 17:56:47.804344   32277 start.go:297] selected driver: docker
	I0819 17:56:47.804360   32277 start.go:901] validating driver "docker" against <nil>
	I0819 17:56:47.804370   32277 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:56:47.805056   32277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:56:47.847937   32277 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 17:56:47.840245494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 17:56:47.848077   32277 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:56:47.848271   32277 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:56:47.849900   32277 out.go:177] * Using Docker driver with root privileges
	I0819 17:56:47.851222   32277 cni.go:84] Creating CNI manager for ""
	I0819 17:56:47.851236   32277 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:56:47.851245   32277 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:56:47.851290   32277 start.go:340] cluster config:
	{Name:addons-142951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-142951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:56:47.852500   32277 out.go:177] * Starting "addons-142951" primary control-plane node in "addons-142951" cluster
	I0819 17:56:47.853652   32277 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 17:56:47.854887   32277 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 17:56:47.855882   32277 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:56:47.855905   32277 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:56:47.855905   32277 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 17:56:47.855913   32277 cache.go:56] Caching tarball of preloaded images
	I0819 17:56:47.855974   32277 preload.go:172] Found /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:56:47.855985   32277 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:56:47.856266   32277 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/config.json ...
	I0819 17:56:47.856286   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/config.json: {Name:mke776199edf729a366eaa93bf40a10a81fb3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:56:47.871758   32277 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 17:56:47.871861   32277 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 17:56:47.871877   32277 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 17:56:47.871881   32277 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 17:56:47.871891   32277 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 17:56:47.871897   32277 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 17:56:59.357589   32277 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 17:56:59.357631   32277 cache.go:194] Successfully downloaded all kic artifacts
	I0819 17:56:59.357674   32277 start.go:360] acquireMachinesLock for addons-142951: {Name:mke80a9d847714c8b2e4c449106f243d13aae04d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:56:59.357780   32277 start.go:364] duration metric: took 82.307µs to acquireMachinesLock for "addons-142951"
	I0819 17:56:59.357806   32277 start.go:93] Provisioning new machine with config: &{Name:addons-142951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-142951 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:56:59.357905   32277 start.go:125] createHost starting for "" (driver="docker")
	I0819 17:56:59.359740   32277 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 17:56:59.359962   32277 start.go:159] libmachine.API.Create for "addons-142951" (driver="docker")
	I0819 17:56:59.359991   32277 client.go:168] LocalClient.Create starting
	I0819 17:56:59.360104   32277 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem
	I0819 17:56:59.620701   32277 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem
	I0819 17:56:59.739545   32277 cli_runner.go:164] Run: docker network inspect addons-142951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 17:56:59.754767   32277 cli_runner.go:211] docker network inspect addons-142951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 17:56:59.754830   32277 network_create.go:284] running [docker network inspect addons-142951] to gather additional debugging logs...
	I0819 17:56:59.754850   32277 cli_runner.go:164] Run: docker network inspect addons-142951
	W0819 17:56:59.769322   32277 cli_runner.go:211] docker network inspect addons-142951 returned with exit code 1
	I0819 17:56:59.769346   32277 network_create.go:287] error running [docker network inspect addons-142951]: docker network inspect addons-142951: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-142951 not found
	I0819 17:56:59.769361   32277 network_create.go:289] output of [docker network inspect addons-142951]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-142951 not found
	
	** /stderr **
	I0819 17:56:59.769474   32277 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 17:56:59.784393   32277 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00192ed30}
	I0819 17:56:59.784434   32277 network_create.go:124] attempt to create docker network addons-142951 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 17:56:59.784468   32277 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-142951 addons-142951
	I0819 17:56:59.838175   32277 network_create.go:108] docker network addons-142951 192.168.49.0/24 created
	I0819 17:56:59.838210   32277 kic.go:121] calculated static IP "192.168.49.2" for the "addons-142951" container
	I0819 17:56:59.838280   32277 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 17:56:59.852260   32277 cli_runner.go:164] Run: docker volume create addons-142951 --label name.minikube.sigs.k8s.io=addons-142951 --label created_by.minikube.sigs.k8s.io=true
	I0819 17:56:59.867964   32277 oci.go:103] Successfully created a docker volume addons-142951
	I0819 17:56:59.868023   32277 cli_runner.go:164] Run: docker run --rm --name addons-142951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142951 --entrypoint /usr/bin/test -v addons-142951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 17:57:07.040061   32277 cli_runner.go:217] Completed: docker run --rm --name addons-142951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142951 --entrypoint /usr/bin/test -v addons-142951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (7.17200162s)
	I0819 17:57:07.040088   32277 oci.go:107] Successfully prepared a docker volume addons-142951
	I0819 17:57:07.040104   32277 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:57:07.040123   32277 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 17:57:07.040169   32277 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-142951:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 17:57:11.349443   32277 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-142951:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.309219526s)
	I0819 17:57:11.349493   32277 kic.go:203] duration metric: took 4.309367412s to extract preloaded images to volume ...
	W0819 17:57:11.349636   32277 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 17:57:11.349748   32277 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 17:57:11.396521   32277 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-142951 --name addons-142951 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142951 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-142951 --network addons-142951 --ip 192.168.49.2 --volume addons-142951:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 17:57:11.691494   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Running}}
	I0819 17:57:11.708615   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:11.725662   32277 cli_runner.go:164] Run: docker exec addons-142951 stat /var/lib/dpkg/alternatives/iptables
	I0819 17:57:11.765459   32277 oci.go:144] the created container "addons-142951" has a running status.
	I0819 17:57:11.765487   32277 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa...
	I0819 17:57:11.912954   32277 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 17:57:11.933490   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:11.949448   32277 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 17:57:11.949474   32277 kic_runner.go:114] Args: [docker exec --privileged addons-142951 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 17:57:11.988957   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:12.014280   32277 machine.go:93] provisionDockerMachine start ...
	I0819 17:57:12.014360   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:12.030653   32277 main.go:141] libmachine: Using SSH client type: native
	I0819 17:57:12.030839   32277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 17:57:12.030851   32277 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 17:57:12.031443   32277 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35324->127.0.0.1:32768: read: connection reset by peer
	I0819 17:57:15.148129   32277 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-142951
	
	I0819 17:57:15.148158   32277 ubuntu.go:169] provisioning hostname "addons-142951"
	I0819 17:57:15.148205   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:15.163942   32277 main.go:141] libmachine: Using SSH client type: native
	I0819 17:57:15.164100   32277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 17:57:15.164113   32277 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-142951 && echo "addons-142951" | sudo tee /etc/hostname
	I0819 17:57:15.286780   32277 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-142951
	
	I0819 17:57:15.286850   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:15.303846   32277 main.go:141] libmachine: Using SSH client type: native
	I0819 17:57:15.304033   32277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 17:57:15.304051   32277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-142951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-142951/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-142951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:57:15.420851   32277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:57:15.420877   32277 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19468-24160/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-24160/.minikube}
	I0819 17:57:15.420909   32277 ubuntu.go:177] setting up certificates
	I0819 17:57:15.420921   32277 provision.go:84] configureAuth start
	I0819 17:57:15.420975   32277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142951
	I0819 17:57:15.437050   32277 provision.go:143] copyHostCerts
	I0819 17:57:15.437111   32277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem (1679 bytes)
	I0819 17:57:15.437259   32277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem (1078 bytes)
	I0819 17:57:15.437331   32277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem (1123 bytes)
	I0819 17:57:15.437396   32277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem org=jenkins.addons-142951 san=[127.0.0.1 192.168.49.2 addons-142951 localhost minikube]
	I0819 17:57:15.625405   32277 provision.go:177] copyRemoteCerts
	I0819 17:57:15.625457   32277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:57:15.625490   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:15.641251   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
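The ssh client values logged above come straight from the container setup earlier in this run: Docker publishes the container's port 22 on a random host port (32768 here), and the provisioner authenticates as the docker user with the id_rsa key it generated under the profile's machines directory. If a failed run needs manual poking, the same connection can be reproduced by hand; a sketch using the port and key path from this log, both of which differ between runs:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-142951
	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa \
	  -p 32768 docker@127.0.0.1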
	I0819 17:57:15.729170   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:57:15.749007   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:57:15.768646   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 17:57:15.788220   32277 provision.go:87] duration metric: took 367.281976ms to configureAuth
	I0819 17:57:15.788247   32277 ubuntu.go:193] setting minikube options for container-runtime
	I0819 17:57:15.788394   32277 config.go:182] Loaded profile config "addons-142951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:57:15.788473   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:15.806070   32277 main.go:141] libmachine: Using SSH client type: native
	I0819 17:57:15.806237   32277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 17:57:15.806252   32277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:57:16.003354   32277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:57:16.003378   32277 machine.go:96] duration metric: took 3.989079175s to provisionDockerMachine
	I0819 17:57:16.003387   32277 client.go:171] duration metric: took 16.643386931s to LocalClient.Create
	I0819 17:57:16.003405   32277 start.go:167] duration metric: took 16.643447497s to libmachine.API.Create "addons-142951"
	I0819 17:57:16.003412   32277 start.go:293] postStartSetup for "addons-142951" (driver="docker")
	I0819 17:57:16.003420   32277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:57:16.003466   32277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:57:16.003496   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:16.018955   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:16.105640   32277 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:57:16.108645   32277 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 17:57:16.108668   32277 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 17:57:16.108676   32277 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 17:57:16.108688   32277 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 17:57:16.108699   32277 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-24160/.minikube/addons for local assets ...
	I0819 17:57:16.108759   32277 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-24160/.minikube/files for local assets ...
	I0819 17:57:16.108782   32277 start.go:296] duration metric: took 105.365839ms for postStartSetup
	I0819 17:57:16.109056   32277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142951
	I0819 17:57:16.125539   32277 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/config.json ...
	I0819 17:57:16.125849   32277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:57:16.125917   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:16.142158   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:16.225618   32277 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 17:57:16.229551   32277 start.go:128] duration metric: took 16.871632525s to createHost
	I0819 17:57:16.229572   32277 start.go:83] releasing machines lock for "addons-142951", held for 16.871779426s
	I0819 17:57:16.229632   32277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142951
	I0819 17:57:16.245668   32277 ssh_runner.go:195] Run: cat /version.json
	I0819 17:57:16.245704   32277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:57:16.245713   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:16.245756   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:16.262132   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:16.262808   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:16.344320   32277 ssh_runner.go:195] Run: systemctl --version
	I0819 17:57:16.348183   32277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:57:16.483148   32277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 17:57:16.487119   32277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:57:16.503330   32277 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 17:57:16.503429   32277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:57:16.528315   32277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0819 17:57:16.528338   32277 start.go:495] detecting cgroup driver to use...
	I0819 17:57:16.528366   32277 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 17:57:16.528398   32277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:57:16.540890   32277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:57:16.549859   32277 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:57:16.549906   32277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:57:16.561072   32277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:57:16.572895   32277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:57:16.648347   32277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:57:16.721318   32277 docker.go:233] disabling docker service ...
	I0819 17:57:16.721419   32277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:57:16.737371   32277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:57:16.746775   32277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:57:16.819702   32277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:57:16.901287   32277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:57:16.911166   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:57:16.924587   32277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:57:16.924631   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.932373   32277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:57:16.932413   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.940905   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.949000   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.957081   32277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:57:16.964475   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.972173   32277 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.984905   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
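The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. Reconstructed from those commands (not a dump taken from this run, and ignoring whatever other keys the file carries), the touched settings end up roughly as:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]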
	I0819 17:57:16.992625   32277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:57:16.999530   32277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:57:17.006496   32277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:57:17.075749   32277 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:57:17.177579   32277 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:57:17.177653   32277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:57:17.180650   32277 start.go:563] Will wait 60s for crictl version
	I0819 17:57:17.180699   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:57:17.183523   32277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:57:17.214182   32277 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 17:57:17.214266   32277 ssh_runner.go:195] Run: crio --version
	I0819 17:57:17.246180   32277 ssh_runner.go:195] Run: crio --version
	I0819 17:57:17.278064   32277 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 17:57:17.279202   32277 cli_runner.go:164] Run: docker network inspect addons-142951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 17:57:17.294524   32277 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 17:57:17.297615   32277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:57:17.306968   32277 kubeadm.go:883] updating cluster {Name:addons-142951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-142951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:57:17.307092   32277 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:57:17.307132   32277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:57:17.367715   32277 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:57:17.367733   32277 crio.go:433] Images already preloaded, skipping extraction
	I0819 17:57:17.367771   32277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:57:17.396662   32277 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:57:17.396680   32277 cache_images.go:84] Images are preloaded, skipping loading
	I0819 17:57:17.396687   32277 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 17:57:17.396767   32277 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-142951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-142951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
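The unit fragment above is what minikube writes as a systemd drop-in for the kubelet (the 363-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down). On a live node the merged unit, drop-in included, can be inspected with:

	sudo systemctl cat kubelet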
	I0819 17:57:17.396823   32277 ssh_runner.go:195] Run: crio config
	I0819 17:57:17.434906   32277 cni.go:84] Creating CNI manager for ""
	I0819 17:57:17.434923   32277 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:57:17.434931   32277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:57:17.434950   32277 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-142951 NodeName:addons-142951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:57:17.435086   32277 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-142951"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:57:17.435141   32277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:57:17.442656   32277 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:57:17.442718   32277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:57:17.450771   32277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 17:57:17.465967   32277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:57:17.480566   32277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
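At this point the generated kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node (it is promoted to kubeadm.yaml just before init, further down). If a config regression is suspected, it can be checked by hand before init runs; a sketch, assuming the kubeadm binary staged by minikube supports the validate subcommand (present in recent releases):

	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new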
	I0819 17:57:17.494846   32277 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 17:57:17.497697   32277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:57:17.506357   32277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:57:17.584274   32277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:57:17.595452   32277 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951 for IP: 192.168.49.2
	I0819 17:57:17.595472   32277 certs.go:194] generating shared ca certs ...
	I0819 17:57:17.595492   32277 certs.go:226] acquiring lock for ca certs: {Name:mk29d2f357e66b5ff77917021423cbbf2fc2a40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:17.595622   32277 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key
	I0819 17:57:17.998787   32277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt ...
	I0819 17:57:17.998813   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt: {Name:mk892498a9d94f742583f9e4d4534f0a394cf1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:17.998974   32277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key ...
	I0819 17:57:17.998985   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key: {Name:mk6308b41ac8dce85ac9fe41456952a216fd065b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:17.999056   32277 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key
	I0819 17:57:18.304422   32277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt ...
	I0819 17:57:18.304455   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt: {Name:mk843459abb4914769f87c4f7b640341b16ad5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.304622   32277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key ...
	I0819 17:57:18.304633   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key: {Name:mk01a481fe536041de09c79442a3c6ea5f83cc0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.304710   32277 certs.go:256] generating profile certs ...
	I0819 17:57:18.304758   32277 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.key
	I0819 17:57:18.304771   32277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt with IP's: []
	I0819 17:57:18.380640   32277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt ...
	I0819 17:57:18.380666   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: {Name:mk2d4a110123a4b16849e02b6ddee4f54ccaaace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.380832   32277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.key ...
	I0819 17:57:18.380849   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.key: {Name:mkfc2b488b758c56830434db2c6360a7aab30347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.380950   32277 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key.c28c9c86
	I0819 17:57:18.380971   32277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt.c28c9c86 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 17:57:18.445145   32277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt.c28c9c86 ...
	I0819 17:57:18.445169   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt.c28c9c86: {Name:mk7e9e5e75586a999b0884653ea24b1296f4ae1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.445335   32277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key.c28c9c86 ...
	I0819 17:57:18.445352   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key.c28c9c86: {Name:mkfd5dd2c92c97c19ab80158f609de57f9851b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.445445   32277 certs.go:381] copying /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt.c28c9c86 -> /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt
	I0819 17:57:18.445516   32277 certs.go:385] copying /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key.c28c9c86 -> /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key
	I0819 17:57:18.445561   32277 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.key
	I0819 17:57:18.445578   32277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.crt with IP's: []
	I0819 17:57:18.743386   32277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.crt ...
	I0819 17:57:18.743412   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.crt: {Name:mkc0dc7b756d678b1a511ffdb986f487ea23bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.743592   32277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.key ...
	I0819 17:57:18.743605   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.key: {Name:mk2d4e405e0e43210449f6a3c33edcb8e99c8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.743792   32277 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 17:57:18.743827   32277 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:57:18.743850   32277 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:57:18.743873   32277 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem (1679 bytes)
	I0819 17:57:18.744438   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:57:18.765673   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:57:18.785444   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:57:18.805040   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:57:18.825183   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 17:57:18.844572   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:57:18.864304   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:57:18.884116   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:57:18.904258   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:57:18.923594   32277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:57:18.937820   32277 ssh_runner.go:195] Run: openssl version
	I0819 17:57:18.942475   32277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:57:18.950123   32277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:57:18.952939   32277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:57:18.952973   32277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:57:18.958751   32277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:57:18.966040   32277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:57:18.968577   32277 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:57:18.968616   32277 kubeadm.go:392] StartCluster: {Name:addons-142951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-142951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:57:18.968688   32277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:57:18.968732   32277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:57:18.998867   32277 cri.go:89] found id: ""
	I0819 17:57:18.998916   32277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 17:57:19.006203   32277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 17:57:19.013397   32277 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 17:57:19.013436   32277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:57:19.020348   32277 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:57:19.020366   32277 kubeadm.go:157] found existing configuration files:
	
	I0819 17:57:19.020398   32277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:57:19.027212   32277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:57:19.027250   32277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:57:19.033818   32277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:57:19.040784   32277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:57:19.040821   32277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:57:19.047589   32277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:57:19.054475   32277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:57:19.054508   32277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:57:19.061216   32277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:57:19.068114   32277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:57:19.068155   32277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 17:57:19.074833   32277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 17:57:19.107947   32277 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 17:57:19.108017   32277 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:57:19.125993   32277 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 17:57:19.126104   32277 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0819 17:57:19.126167   32277 kubeadm.go:310] OS: Linux
	I0819 17:57:19.126221   32277 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 17:57:19.126277   32277 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 17:57:19.126358   32277 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 17:57:19.126449   32277 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 17:57:19.126537   32277 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 17:57:19.126631   32277 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 17:57:19.126705   32277 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 17:57:19.126788   32277 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 17:57:19.126864   32277 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 17:57:19.176788   32277 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:57:19.176929   32277 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:57:19.177047   32277 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 17:57:19.182591   32277 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:57:19.184833   32277 out.go:235]   - Generating certificates and keys ...
	I0819 17:57:19.184936   32277 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:57:19.185018   32277 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:57:19.316059   32277 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 17:57:19.497248   32277 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 17:57:19.832276   32277 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 17:57:19.998862   32277 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 17:57:20.232100   32277 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 17:57:20.232258   32277 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-142951 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 17:57:20.315566   32277 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 17:57:20.315742   32277 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-142951 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 17:57:20.449530   32277 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 17:57:20.685085   32277 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 17:57:20.903096   32277 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 17:57:20.903191   32277 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:57:21.046144   32277 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:57:21.210289   32277 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 17:57:21.530383   32277 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:57:21.681941   32277 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:57:21.830596   32277 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:57:21.830943   32277 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:57:21.833285   32277 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:57:21.835335   32277 out.go:235]   - Booting up control plane ...
	I0819 17:57:21.835429   32277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:57:21.835546   32277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:57:21.835640   32277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:57:21.843375   32277 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:57:21.848501   32277 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:57:21.848557   32277 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:57:21.922048   32277 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 17:57:21.922154   32277 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 17:57:22.423256   32277 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.288249ms
	I0819 17:57:22.423380   32277 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 17:57:26.924062   32277 kubeadm.go:310] [api-check] The API server is healthy after 4.500805691s
	I0819 17:57:26.933891   32277 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 17:57:26.941687   32277 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 17:57:26.957299   32277 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 17:57:26.957539   32277 kubeadm.go:310] [mark-control-plane] Marking the node addons-142951 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 17:57:26.964077   32277 kubeadm.go:310] [bootstrap-token] Using token: azxnvb.3v27aiuj1vv955cj
	I0819 17:57:26.966089   32277 out.go:235]   - Configuring RBAC rules ...
	I0819 17:57:26.966210   32277 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 17:57:26.968296   32277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 17:57:26.973244   32277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 17:57:26.975390   32277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 17:57:26.977423   32277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 17:57:26.980458   32277 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 17:57:27.329845   32277 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 17:57:27.742834   32277 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 17:57:28.329651   32277 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 17:57:28.330416   32277 kubeadm.go:310] 
	I0819 17:57:28.330505   32277 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 17:57:28.330515   32277 kubeadm.go:310] 
	I0819 17:57:28.330619   32277 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 17:57:28.330629   32277 kubeadm.go:310] 
	I0819 17:57:28.330681   32277 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 17:57:28.330776   32277 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 17:57:28.330856   32277 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 17:57:28.330866   32277 kubeadm.go:310] 
	I0819 17:57:28.330952   32277 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 17:57:28.330967   32277 kubeadm.go:310] 
	I0819 17:57:28.331040   32277 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 17:57:28.331051   32277 kubeadm.go:310] 
	I0819 17:57:28.331128   32277 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 17:57:28.331237   32277 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 17:57:28.331326   32277 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 17:57:28.331336   32277 kubeadm.go:310] 
	I0819 17:57:28.331458   32277 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 17:57:28.331561   32277 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 17:57:28.331573   32277 kubeadm.go:310] 
	I0819 17:57:28.331684   32277 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token azxnvb.3v27aiuj1vv955cj \
	I0819 17:57:28.331837   32277 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59c63718fbc86a78511e804b1caaa3c322b35e7a3de8f3eb39f0bfe29aa00431 \
	I0819 17:57:28.331883   32277 kubeadm.go:310] 	--control-plane 
	I0819 17:57:28.331893   32277 kubeadm.go:310] 
	I0819 17:57:28.332006   32277 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 17:57:28.332016   32277 kubeadm.go:310] 
	I0819 17:57:28.332133   32277 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token azxnvb.3v27aiuj1vv955cj \
	I0819 17:57:28.332281   32277 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59c63718fbc86a78511e804b1caaa3c322b35e7a3de8f3eb39f0bfe29aa00431 
	I0819 17:57:28.334006   32277 kubeadm.go:310] W0819 17:57:19.105666    1292 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:57:28.334257   32277 kubeadm.go:310] W0819 17:57:19.106240    1292 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:57:28.334443   32277 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0819 17:57:28.334566   32277 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
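The join commands kubeadm prints above embed a CA public key hash (--discovery-token-ca-cert-hash). That value is not minikube-specific and can be recomputed from the cluster CA with the standard openssl pipeline, shown here against the certificate directory this run uses (/var/lib/minikube/certs) rather than kubeadm's default /etc/kubernetes/pki:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'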
	I0819 17:57:28.334580   32277 cni.go:84] Creating CNI manager for ""
	I0819 17:57:28.334586   32277 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:57:28.336199   32277 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 17:57:28.337342   32277 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 17:57:28.340505   32277 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 17:57:28.340530   32277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 17:57:28.355907   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 17:57:28.535858   32277 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 17:57:28.536017   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:28.536068   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-142951 minikube.k8s.io/updated_at=2024_08_19T17_57_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=addons-142951 minikube.k8s.io/primary=true
	I0819 17:57:28.542731   32277 ops.go:34] apiserver oom_adj: -16
	I0819 17:57:28.612555   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:29.113101   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:29.612824   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:30.113166   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:30.612923   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:31.113455   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:31.613599   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:32.113221   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:32.612910   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:32.670503   32277 kubeadm.go:1113] duration metric: took 4.134533777s to wait for elevateKubeSystemPrivileges
	I0819 17:57:32.670540   32277 kubeadm.go:394] duration metric: took 13.701926641s to StartCluster
	I0819 17:57:32.670563   32277 settings.go:142] acquiring lock: {Name:mkd30ec37009c3562b283392e8fb1c4131be31b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:32.670664   32277 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 17:57:32.670984   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/kubeconfig: {Name:mk3fc9bc92b0be5459854fbe59603f93f92756ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:32.671151   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 17:57:32.671160   32277 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:57:32.671257   32277 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 17:57:32.671332   32277 config.go:182] Loaded profile config "addons-142951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:57:32.671366   32277 addons.go:69] Setting cloud-spanner=true in profile "addons-142951"
	I0819 17:57:32.671368   32277 addons.go:69] Setting default-storageclass=true in profile "addons-142951"
	I0819 17:57:32.671379   32277 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-142951"
	I0819 17:57:32.671384   32277 addons.go:69] Setting metrics-server=true in profile "addons-142951"
	I0819 17:57:32.671398   32277 addons.go:234] Setting addon cloud-spanner=true in "addons-142951"
	I0819 17:57:32.671401   32277 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-142951"
	I0819 17:57:32.671400   32277 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-142951"
	I0819 17:57:32.671399   32277 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-142951"
	I0819 17:57:32.671423   32277 addons.go:234] Setting addon metrics-server=true in "addons-142951"
	I0819 17:57:32.671410   32277 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-142951"
	I0819 17:57:32.671437   32277 addons.go:69] Setting volumesnapshots=true in profile "addons-142951"
	I0819 17:57:32.671437   32277 addons.go:69] Setting volcano=true in profile "addons-142951"
	I0819 17:57:32.671451   32277 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-142951"
	I0819 17:57:32.671454   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671469   32277 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-142951"
	I0819 17:57:32.671474   32277 addons.go:234] Setting addon volcano=true in "addons-142951"
	I0819 17:57:32.671491   32277 addons.go:69] Setting registry=true in profile "addons-142951"
	I0819 17:57:32.671493   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671500   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671511   32277 addons.go:234] Setting addon registry=true in "addons-142951"
	I0819 17:57:32.671544   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671562   32277 addons.go:69] Setting storage-provisioner=true in profile "addons-142951"
	I0819 17:57:32.671582   32277 addons.go:234] Setting addon storage-provisioner=true in "addons-142951"
	I0819 17:57:32.671591   32277 addons.go:69] Setting ingress=true in profile "addons-142951"
	I0819 17:57:32.671602   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671621   32277 addons.go:234] Setting addon ingress=true in "addons-142951"
	I0819 17:57:32.671675   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671785   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671791   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671950   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671454   32277 addons.go:234] Setting addon volumesnapshots=true in "addons-142951"
	I0819 17:57:32.672001   32277 addons.go:69] Setting ingress-dns=true in profile "addons-142951"
	I0819 17:57:32.672023   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672036   32277 addons.go:234] Setting addon ingress-dns=true in "addons-142951"
	I0819 17:57:32.672060   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671425   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.672088   32277 addons.go:69] Setting gcp-auth=true in profile "addons-142951"
	I0819 17:57:32.672112   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672128   32277 addons.go:69] Setting helm-tiller=true in profile "addons-142951"
	I0819 17:57:32.672154   32277 addons.go:234] Setting addon helm-tiller=true in "addons-142951"
	I0819 17:57:32.672179   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671366   32277 addons.go:69] Setting yakd=true in profile "addons-142951"
	I0819 17:57:32.672410   32277 addons.go:234] Setting addon yakd=true in "addons-142951"
	I0819 17:57:32.672438   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.672536   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672655   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672698   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671429   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.672840   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.673224   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672071   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.677328   32277 out.go:177] * Verifying Kubernetes components...
	I0819 17:57:32.677531   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671981   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671960   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672078   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672117   32277 mustload.go:65] Loading cluster: addons-142951
	I0819 17:57:32.671986   32277 addons.go:69] Setting inspektor-gadget=true in profile "addons-142951"
	I0819 17:57:32.678684   32277 addons.go:234] Setting addon inspektor-gadget=true in "addons-142951"
	I0819 17:57:32.678722   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.679217   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.679330   32277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:57:32.701627   32277 config.go:182] Loaded profile config "addons-142951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:57:32.701945   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.719438   32277 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:57:32.719438   32277 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 17:57:32.720976   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 17:57:32.721040   32277 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:57:32.721047   32277 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:57:32.721963   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 17:57:32.721984   32277 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 17:57:32.722059   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.723062   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 17:57:32.723512   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 17:57:32.723585   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.726105   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 17:57:32.726107   32277 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 17:57:32.726207   32277 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 17:57:32.727327   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 17:57:32.727342   32277 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 17:57:32.727358   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 17:57:32.727415   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.727568   32277 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 17:57:32.728722   32277 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 17:57:32.729086   32277 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:57:32.729100   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 17:57:32.729165   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.730237   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 17:57:32.730376   32277 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 17:57:32.730388   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 17:57:32.730430   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.732185   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 17:57:32.733293   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 17:57:32.734323   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 17:57:32.734931   32277 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-142951"
	I0819 17:57:32.734976   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.735359   32277 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 17:57:32.735409   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.738974   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 17:57:32.739081   32277 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 17:57:32.739101   32277 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 17:57:32.739160   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.740153   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 17:57:32.740171   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 17:57:32.740230   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.750583   32277 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 17:57:32.750696   32277 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:57:32.751803   32277 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:57:32.751828   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 17:57:32.751877   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.752185   32277 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:57:32.752204   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 17:57:32.752250   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.760898   32277 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 17:57:32.764077   32277 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 17:57:32.764102   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 17:57:32.764162   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.765303   32277 addons.go:234] Setting addon default-storageclass=true in "addons-142951"
	I0819 17:57:32.765346   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.765810   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.773148   32277 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 17:57:32.774810   32277 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 17:57:32.774833   32277 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 17:57:32.774897   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.785271   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.800919   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.801465   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.811101   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.811773   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.816295   32277 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 17:57:32.817438   32277 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 17:57:32.817456   32277 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 17:57:32.817526   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.819429   32277 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 17:57:32.819593   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.820652   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.823981   32277 out.go:177]   - Using image docker.io/busybox:stable
	I0819 17:57:32.825381   32277 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:57:32.825400   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 17:57:32.825456   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.827837   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.829928   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.831013   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	W0819 17:57:32.832664   32277 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 17:57:32.835809   32277 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 17:57:32.835822   32277 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 17:57:32.835861   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.836405   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.837992   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 17:57:32.847301   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.847301   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.849684   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.855066   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.957616   32277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:57:33.165902   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:57:33.175062   32277 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 17:57:33.175084   32277 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 17:57:33.258509   32277 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 17:57:33.258550   32277 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 17:57:33.259813   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 17:57:33.259841   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 17:57:33.269815   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:57:33.270309   32277 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 17:57:33.270324   32277 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 17:57:33.273225   32277 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 17:57:33.273245   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 17:57:33.277593   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 17:57:33.277968   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 17:57:33.366699   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:57:33.378073   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:57:33.459712   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 17:57:33.459785   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 17:57:33.464489   32277 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 17:57:33.464513   32277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 17:57:33.464827   32277 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 17:57:33.464843   32277 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 17:57:33.465036   32277 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:57:33.465047   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 17:57:33.465897   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:57:33.477562   32277 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 17:57:33.477627   32277 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 17:57:33.479134   32277 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 17:57:33.479196   32277 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 17:57:33.573354   32277 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 17:57:33.573433   32277 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 17:57:33.770675   32277 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 17:57:33.770745   32277 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 17:57:33.772243   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:57:33.773452   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 17:57:33.773472   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 17:57:33.864649   32277 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:57:33.864722   32277 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 17:57:33.871377   32277 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 17:57:33.871440   32277 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 17:57:33.958908   32277 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 17:57:33.958986   32277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 17:57:33.962768   32277 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 17:57:33.962791   32277 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 17:57:34.074729   32277 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:57:34.074811   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 17:57:34.160017   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 17:57:34.179628   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:57:34.258604   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:57:34.259725   32277 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 17:57:34.259750   32277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 17:57:34.265858   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 17:57:34.265885   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 17:57:34.274508   32277 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.436483792s)
	I0819 17:57:34.274542   32277 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 17:57:34.275718   32277 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.318075078s)
	I0819 17:57:34.276596   32277 node_ready.go:35] waiting up to 6m0s for node "addons-142951" to be "Ready" ...
	I0819 17:57:34.358428   32277 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 17:57:34.358459   32277 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 17:57:34.657581   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 17:57:34.657680   32277 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 17:57:34.674408   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 17:57:34.674483   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 17:57:35.072906   32277 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 17:57:35.072975   32277 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 17:57:35.164428   32277 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:57:35.164505   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 17:57:35.167545   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 17:57:35.167621   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 17:57:35.380870   32277 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 17:57:35.380952   32277 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 17:57:35.476478   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 17:57:35.476563   32277 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 17:57:35.659290   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:57:35.670663   32277 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-142951" context rescaled to 1 replicas
	I0819 17:57:35.771284   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 17:57:35.771367   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 17:57:35.775428   32277 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:57:35.775492   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 17:57:35.978208   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:57:36.174228   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 17:57:36.174311   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 17:57:36.381679   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:36.478604   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:57:36.478678   32277 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 17:57:36.771239   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:57:38.782980   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:39.275787   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.109845881s)
	I0819 17:57:39.275825   32277 addons.go:475] Verifying addon ingress=true in "addons-142951"
	I0819 17:57:39.275999   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.006102577s)
	I0819 17:57:39.276070   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.99808321s)
	I0819 17:57:39.276042   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.998422643s)
	I0819 17:57:39.276142   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.909366582s)
	I0819 17:57:39.276188   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.898025973s)
	I0819 17:57:39.276267   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.810351191s)
	I0819 17:57:39.276307   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.504010124s)
	I0819 17:57:39.276318   32277 addons.go:475] Verifying addon registry=true in "addons-142951"
	I0819 17:57:39.276348   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.116238002s)
	I0819 17:57:39.276407   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.096750562s)
	I0819 17:57:39.276555   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.017905424s)
	I0819 17:57:39.276579   32277 addons.go:475] Verifying addon metrics-server=true in "addons-142951"
	I0819 17:57:39.277262   32277 out.go:177] * Verifying ingress addon...
	I0819 17:57:39.278168   32277 out.go:177] * Verifying registry addon...
	I0819 17:57:39.278199   32277 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-142951 service yakd-dashboard -n yakd-dashboard
	
	I0819 17:57:39.279728   32277 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 17:57:39.281119   32277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 17:57:39.287753   32277 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:57:39.287813   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:39.288003   32277 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 17:57:39.288023   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0819 17:57:39.360347   32277 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0819 17:57:39.783373   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:39.783807   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:40.061680   32277 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 17:57:40.061768   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:40.076423   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.098124831s)
	I0819 17:57:40.076572   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.417183138s)
	W0819 17:57:40.076619   32277 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:57:40.076652   32277 retry.go:31] will retry after 214.60233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:57:40.082675   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:40.263534   32277 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 17:57:40.283312   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:40.283406   32277 addons.go:234] Setting addon gcp-auth=true in "addons-142951"
	I0819 17:57:40.283469   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:40.283792   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:40.283853   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:40.291432   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:57:40.304535   32277 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 17:57:40.304590   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:40.323802   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:40.598125   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.826789175s)
	I0819 17:57:40.598169   32277 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-142951"
	I0819 17:57:40.599653   32277 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 17:57:40.601695   32277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 17:57:40.603926   32277 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:57:40.603940   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:40.782917   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:40.783514   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:41.161604   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:41.280921   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:41.283698   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:41.283795   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:41.605411   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:41.782618   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:41.783651   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:42.106231   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:42.283153   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:42.283351   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:42.605385   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:42.783700   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:42.784083   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:43.104301   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:43.260614   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.969137296s)
	I0819 17:57:43.260643   32277 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.956079566s)
	I0819 17:57:43.262461   32277 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:57:43.263628   32277 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 17:57:43.265034   32277 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 17:57:43.265059   32277 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 17:57:43.282844   32277 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 17:57:43.282864   32277 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 17:57:43.282920   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:43.283357   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:43.299098   32277 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:57:43.299121   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 17:57:43.315237   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:57:43.606663   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:43.778948   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:43.783539   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:43.784429   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:43.875331   32277 addons.go:475] Verifying addon gcp-auth=true in "addons-142951"
	I0819 17:57:43.876684   32277 out.go:177] * Verifying gcp-auth addon...
	I0819 17:57:43.878951   32277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 17:57:43.885116   32277 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 17:57:43.885156   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:44.105015   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:44.282976   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:44.284026   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:44.382311   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:44.604697   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:44.783314   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:44.783355   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:44.883146   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:45.105203   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:45.282721   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:45.283568   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:45.381671   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:45.605392   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:45.779643   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:45.783205   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:45.783426   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:45.881749   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:46.105672   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:46.282974   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:46.283794   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:46.382031   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:46.604427   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:46.783034   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:46.783857   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:46.882571   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:47.105541   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:47.283021   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:47.283243   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:47.381597   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:47.605172   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:47.779687   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:47.782933   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:47.783260   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:47.881682   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:48.105293   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:48.282768   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:48.283031   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:48.382199   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:48.605106   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:48.782765   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:48.783891   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:48.882154   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:49.104784   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:49.282339   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:49.283377   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:49.381702   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:49.605179   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:49.782691   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:49.783661   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:49.881779   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:50.105489   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:50.279778   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:50.282303   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:50.283182   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:50.381346   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:50.604604   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:50.783070   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:50.783462   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:50.881654   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:51.105410   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:51.279451   32277 node_ready.go:49] node "addons-142951" has status "Ready":"True"
	I0819 17:57:51.279479   32277 node_ready.go:38] duration metric: took 17.002843434s for node "addons-142951" to be "Ready" ...
	I0819 17:57:51.279490   32277 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:57:51.284892   32277 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:57:51.284914   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:51.285706   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:51.287598   32277 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fc8vt" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:51.381615   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:51.605713   32277 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:57:51.605733   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:51.786707   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:51.786835   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:51.883638   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:52.106288   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:52.283599   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:52.283768   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:52.383436   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:52.606392   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:52.784211   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:52.784587   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:52.881975   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:53.106284   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:53.284061   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:53.284082   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:53.291576   32277 pod_ready.go:93] pod "coredns-6f6b679f8f-fc8vt" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.291595   32277 pod_ready.go:82] duration metric: took 2.003963947s for pod "coredns-6f6b679f8f-fc8vt" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.291618   32277 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.295360   32277 pod_ready.go:93] pod "etcd-addons-142951" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.295377   32277 pod_ready.go:82] duration metric: took 3.753458ms for pod "etcd-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.295388   32277 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.299720   32277 pod_ready.go:93] pod "kube-apiserver-addons-142951" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.299736   32277 pod_ready.go:82] duration metric: took 4.342068ms for pod "kube-apiserver-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.299747   32277 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.303277   32277 pod_ready.go:93] pod "kube-controller-manager-addons-142951" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.303292   32277 pod_ready.go:82] duration metric: took 3.538469ms for pod "kube-controller-manager-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.303301   32277 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q94sk" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.306438   32277 pod_ready.go:93] pod "kube-proxy-q94sk" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.306456   32277 pod_ready.go:82] duration metric: took 3.147987ms for pod "kube-proxy-q94sk" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.306465   32277 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.381797   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:53.605681   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:53.690272   32277 pod_ready.go:93] pod "kube-scheduler-addons-142951" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.690295   32277 pod_ready.go:82] duration metric: took 383.821034ms for pod "kube-scheduler-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.690307   32277 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.783926   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:53.783925   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:53.881993   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:54.160304   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:54.284087   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:54.284413   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:54.382229   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:54.606437   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:54.783431   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:54.784576   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:54.882066   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:55.106444   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:55.283729   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:55.285215   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:55.381956   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:55.605773   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:55.695562   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:57:55.783858   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:55.783909   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:55.882955   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:56.105586   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:56.283698   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:56.283697   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:56.382127   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:56.605857   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:56.783133   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:56.784098   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:56.882613   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:57.106986   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:57.283744   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:57.283809   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:57.382122   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:57.661370   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:57.761948   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:57:57.783840   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:57.784594   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:57.882708   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:58.162177   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:58.283823   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:58.284193   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:58.382764   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:58.606798   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:58.784472   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:58.784623   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:58.882128   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:59.105646   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:59.285080   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:59.285438   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:59.382750   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:59.607013   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:59.783553   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:59.783738   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:59.882468   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:00.105737   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:00.195852   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:00.285239   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:00.285242   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:00.385036   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:00.605452   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:00.783819   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:00.785292   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:00.882860   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:01.106799   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:01.284180   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:01.284275   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:01.382922   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:01.606398   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:01.784121   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:01.784700   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:01.883064   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:02.106536   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:02.284514   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:02.284610   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:02.382212   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:02.605266   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:02.695460   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:02.784222   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:02.784238   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:02.882411   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:03.106926   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:03.284028   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:03.284202   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:03.381890   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:03.605389   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:03.783985   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:03.784088   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:03.881786   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:04.106653   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:04.283660   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:04.284725   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:04.382326   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:04.605675   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:04.783864   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:04.784149   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:04.882388   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:05.163469   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:05.259246   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:05.283891   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:05.284489   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:05.381716   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:05.606736   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:05.784014   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:05.784036   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:05.882374   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:06.106522   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:06.283533   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:06.284393   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:06.383061   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:06.605832   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:06.784203   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:06.785737   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:06.881915   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:07.106538   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:07.283917   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:07.283918   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:07.383668   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:07.606361   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:07.695600   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:07.784072   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:07.784398   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:07.882479   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:08.105732   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:08.284066   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:08.284555   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:08.381725   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:08.605931   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:08.784101   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:08.784128   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:08.882183   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:09.105424   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:09.284435   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:09.284555   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:09.382179   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:09.606501   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:09.696118   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:09.783530   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:09.784242   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:09.883153   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:10.105734   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:10.284130   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:10.284403   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:10.382390   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:10.605990   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:10.784445   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:10.784870   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:10.882394   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:11.106389   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:11.283900   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:11.283950   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:11.381895   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:11.606066   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:11.783685   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:11.783946   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:11.881788   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:12.106254   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:12.195338   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:12.283693   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:12.283869   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:12.381878   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:12.605434   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:12.783118   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:12.783989   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:12.882526   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:13.105805   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:13.283589   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:13.283744   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:13.381816   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:13.605142   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:13.783801   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:13.784054   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:13.882412   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:14.105803   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:14.195744   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:14.284671   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:14.285749   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:14.382752   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:14.669466   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:14.866697   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:14.867989   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:14.964367   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:15.162584   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:15.372426   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:15.372772   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:15.458984   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:15.661342   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:15.784378   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:15.785113   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:15.882202   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:16.106230   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:16.284546   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:16.285002   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:16.382169   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:16.606167   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:16.695578   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:16.784446   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:16.784491   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:16.883532   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:17.106570   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:17.284007   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:17.285190   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:17.382586   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:17.606311   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:17.783847   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:17.784589   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:17.882406   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:18.106210   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:18.284511   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:18.284783   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:18.382200   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:18.606482   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:18.695824   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:18.783650   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:18.784719   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:18.882070   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:19.106342   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:19.284532   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:19.285003   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:19.383865   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:19.606452   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:19.783466   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:19.784445   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:19.882228   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:20.105407   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:20.284332   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:20.284588   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:20.383631   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:20.606204   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:20.695944   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:20.784199   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:20.784260   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:20.882985   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:21.105484   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:21.283267   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:21.286356   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:21.385631   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:21.606267   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:21.783975   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:21.784101   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:21.882388   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:22.106276   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:22.285469   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:22.285801   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:22.382694   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:22.606850   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:22.758972   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:22.784306   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:22.784360   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:22.882381   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:23.105508   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:23.283970   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:23.284128   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:23.383406   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:23.606089   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:23.782938   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:23.784586   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:23.881729   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:24.106940   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:24.284448   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:24.284798   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:24.383610   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:24.606471   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:24.784127   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:24.784184   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:24.882340   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:25.105574   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:25.196864   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:25.284874   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:25.285496   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:25.382296   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:25.606454   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:25.783873   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:25.784000   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:25.882222   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:26.105897   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:26.284181   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:26.285636   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:26.382337   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:26.605872   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:26.783672   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:26.783742   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:26.881951   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:27.106280   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:27.283935   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:27.285007   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:27.382568   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:27.606382   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:27.695131   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:27.783674   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:27.784021   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:27.882640   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:28.106891   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:28.283392   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:28.283741   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:28.382396   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:28.606796   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:28.784242   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:28.784380   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:28.882450   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:29.109480   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:29.284301   32277 kapi.go:107] duration metric: took 50.003181765s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 17:58:29.284588   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:29.382552   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:29.630400   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:29.695696   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:29.784186   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:29.882146   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:30.105776   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:30.283070   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:30.382943   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:30.605981   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:30.866609   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:30.883148   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:31.162115   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:31.284478   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:31.384117   32277 kapi.go:107] duration metric: took 47.505166867s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 17:58:31.385403   32277 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-142951 cluster.
	I0819 17:58:31.386562   32277 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 17:58:31.387674   32277 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 17:58:31.661911   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:31.764465   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:31.784429   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:32.161583   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:32.283906   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:32.661988   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:32.783565   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:33.106746   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:33.283351   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:33.605926   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:33.783752   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:34.106282   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:34.195709   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:34.284424   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:34.606799   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:34.783863   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:35.105912   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:35.284048   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:35.605731   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:35.783708   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:36.107688   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:36.197021   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:36.284162   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:36.661425   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:36.783864   32277 kapi.go:107] duration metric: took 57.504134819s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 17:58:37.106192   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:37.662531   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:38.106463   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:38.605899   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:38.695981   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:39.105528   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:39.606629   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:40.105235   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:40.605724   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:41.106811   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:41.196632   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:41.606014   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:42.106421   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:42.605546   32277 kapi.go:107] duration metric: took 1m2.003851904s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 17:58:42.607331   32277 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, helm-tiller, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0819 17:58:42.608458   32277 addons.go:510] duration metric: took 1m9.937212955s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner helm-tiller metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0819 17:58:43.695594   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:46.194810   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:48.195487   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:50.195627   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:52.195921   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:54.695446   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:56.695594   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:58.696153   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:01.194762   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:03.195871   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:05.695908   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:08.195682   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:10.695200   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:12.695785   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:14.785744   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:17.195000   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:19.195992   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:21.695558   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:23.695822   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:26.195163   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:28.695640   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:31.194980   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:31.695647   32277 pod_ready.go:93] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"True"
	I0819 17:59:31.695667   32277 pod_ready.go:82] duration metric: took 1m38.005353358s for pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace to be "Ready" ...
	I0819 17:59:31.695677   32277 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bc72h" in "kube-system" namespace to be "Ready" ...
	I0819 17:59:31.699319   32277 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-bc72h" in "kube-system" namespace has status "Ready":"True"
	I0819 17:59:31.699335   32277 pod_ready.go:82] duration metric: took 3.65301ms for pod "nvidia-device-plugin-daemonset-bc72h" in "kube-system" namespace to be "Ready" ...
	I0819 17:59:31.699352   32277 pod_ready.go:39] duration metric: took 1m40.419848821s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:59:31.699367   32277 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:59:31.699393   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:59:31.699433   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:59:31.731403   32277 cri.go:89] found id: "5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:31.731426   32277 cri.go:89] found id: ""
	I0819 17:59:31.731434   32277 logs.go:276] 1 containers: [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350]
	I0819 17:59:31.731478   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.734590   32277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:59:31.734649   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:59:31.765868   32277 cri.go:89] found id: "bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:31.765890   32277 cri.go:89] found id: ""
	I0819 17:59:31.765897   32277 logs.go:276] 1 containers: [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f]
	I0819 17:59:31.765941   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.768992   32277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:59:31.769040   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:59:31.799498   32277 cri.go:89] found id: "bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:31.799519   32277 cri.go:89] found id: ""
	I0819 17:59:31.799526   32277 logs.go:276] 1 containers: [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8]
	I0819 17:59:31.799572   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.802512   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:59:31.802558   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:59:31.833492   32277 cri.go:89] found id: "a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:31.833509   32277 cri.go:89] found id: ""
	I0819 17:59:31.833518   32277 logs.go:276] 1 containers: [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c]
	I0819 17:59:31.833566   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.836521   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:59:31.836569   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:59:31.867200   32277 cri.go:89] found id: "da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:31.867229   32277 cri.go:89] found id: ""
	I0819 17:59:31.867239   32277 logs.go:276] 1 containers: [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97]
	I0819 17:59:31.867288   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.870451   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:59:31.870501   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:59:31.901646   32277 cri.go:89] found id: "7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:31.901665   32277 cri.go:89] found id: ""
	I0819 17:59:31.901673   32277 logs.go:276] 1 containers: [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664]
	I0819 17:59:31.901713   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.904652   32277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:59:31.904707   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:59:31.935321   32277 cri.go:89] found id: "f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:31.935339   32277 cri.go:89] found id: ""
	I0819 17:59:31.935349   32277 logs.go:276] 1 containers: [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf]
	I0819 17:59:31.935398   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.938352   32277 logs.go:123] Gathering logs for kube-controller-manager [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664] ...
	I0819 17:59:31.938371   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:31.993944   32277 logs.go:123] Gathering logs for container status ...
	I0819 17:59:31.993974   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:59:32.032005   32277 logs.go:123] Gathering logs for kubelet ...
	I0819 17:59:32.032033   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:59:32.098309   32277 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:59:32.098347   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:59:32.191911   32277 logs.go:123] Gathering logs for kube-apiserver [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350] ...
	I0819 17:59:32.191936   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:32.234689   32277 logs.go:123] Gathering logs for kube-scheduler [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c] ...
	I0819 17:59:32.234713   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:32.270838   32277 logs.go:123] Gathering logs for kube-proxy [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97] ...
	I0819 17:59:32.270867   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:32.301703   32277 logs.go:123] Gathering logs for kindnet [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf] ...
	I0819 17:59:32.301728   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:32.339842   32277 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:59:32.339868   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:59:32.411461   32277 logs.go:123] Gathering logs for dmesg ...
	I0819 17:59:32.411496   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:59:32.423149   32277 logs.go:123] Gathering logs for etcd [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f] ...
	I0819 17:59:32.423171   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:32.478030   32277 logs.go:123] Gathering logs for coredns [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8] ...
	I0819 17:59:32.478060   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:35.010801   32277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:59:35.024410   32277 api_server.go:72] duration metric: took 2m2.353215137s to wait for apiserver process to appear ...
	I0819 17:59:35.024435   32277 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:59:35.024464   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:59:35.024517   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:59:35.056105   32277 cri.go:89] found id: "5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:35.056126   32277 cri.go:89] found id: ""
	I0819 17:59:35.056134   32277 logs.go:276] 1 containers: [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350]
	I0819 17:59:35.056173   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.059369   32277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:59:35.059434   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:59:35.090794   32277 cri.go:89] found id: "bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:35.090817   32277 cri.go:89] found id: ""
	I0819 17:59:35.090826   32277 logs.go:276] 1 containers: [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f]
	I0819 17:59:35.090873   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.093994   32277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:59:35.094057   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:59:35.125952   32277 cri.go:89] found id: "bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:35.125972   32277 cri.go:89] found id: ""
	I0819 17:59:35.125979   32277 logs.go:276] 1 containers: [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8]
	I0819 17:59:35.126018   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.129227   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:59:35.129337   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:59:35.161349   32277 cri.go:89] found id: "a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:35.161386   32277 cri.go:89] found id: ""
	I0819 17:59:35.161395   32277 logs.go:276] 1 containers: [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c]
	I0819 17:59:35.161445   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.164682   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:59:35.164745   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:59:35.197815   32277 cri.go:89] found id: "da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:35.197837   32277 cri.go:89] found id: ""
	I0819 17:59:35.197845   32277 logs.go:276] 1 containers: [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97]
	I0819 17:59:35.197889   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.200963   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:59:35.201013   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:59:35.233247   32277 cri.go:89] found id: "7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:35.233265   32277 cri.go:89] found id: ""
	I0819 17:59:35.233272   32277 logs.go:276] 1 containers: [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664]
	I0819 17:59:35.233312   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.236597   32277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:59:35.236666   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:59:35.269189   32277 cri.go:89] found id: "f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:35.269210   32277 cri.go:89] found id: ""
	I0819 17:59:35.269217   32277 logs.go:276] 1 containers: [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf]
	I0819 17:59:35.269264   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.272443   32277 logs.go:123] Gathering logs for dmesg ...
	I0819 17:59:35.272462   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:59:35.283559   32277 logs.go:123] Gathering logs for etcd [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f] ...
	I0819 17:59:35.283584   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:35.341413   32277 logs.go:123] Gathering logs for kube-scheduler [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c] ...
	I0819 17:59:35.341442   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:35.379012   32277 logs.go:123] Gathering logs for kube-proxy [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97] ...
	I0819 17:59:35.379041   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:35.411198   32277 logs.go:123] Gathering logs for kube-controller-manager [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664] ...
	I0819 17:59:35.411224   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:35.466988   32277 logs.go:123] Gathering logs for kindnet [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf] ...
	I0819 17:59:35.467021   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:35.507149   32277 logs.go:123] Gathering logs for kubelet ...
	I0819 17:59:35.507185   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:59:35.572561   32277 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:59:35.572595   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:59:35.665635   32277 logs.go:123] Gathering logs for kube-apiserver [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350] ...
	I0819 17:59:35.665662   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:35.707679   32277 logs.go:123] Gathering logs for coredns [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8] ...
	I0819 17:59:35.707708   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:35.742132   32277 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:59:35.742158   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:59:35.815470   32277 logs.go:123] Gathering logs for container status ...
	I0819 17:59:35.815505   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:59:38.357695   32277 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 17:59:38.361111   32277 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 17:59:38.361947   32277 api_server.go:141] control plane version: v1.31.0
	I0819 17:59:38.361967   32277 api_server.go:131] duration metric: took 3.337527252s to wait for apiserver health ...
	I0819 17:59:38.361975   32277 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:59:38.361997   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:59:38.362043   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:59:38.393619   32277 cri.go:89] found id: "5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:38.393636   32277 cri.go:89] found id: ""
	I0819 17:59:38.393644   32277 logs.go:276] 1 containers: [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350]
	I0819 17:59:38.393689   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.396602   32277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:59:38.396652   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:59:38.426870   32277 cri.go:89] found id: "bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:38.426893   32277 cri.go:89] found id: ""
	I0819 17:59:38.426901   32277 logs.go:276] 1 containers: [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f]
	I0819 17:59:38.426943   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.430132   32277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:59:38.430189   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:59:38.461329   32277 cri.go:89] found id: "bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:38.461345   32277 cri.go:89] found id: ""
	I0819 17:59:38.461352   32277 logs.go:276] 1 containers: [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8]
	I0819 17:59:38.461389   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.464280   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:59:38.464326   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:59:38.495248   32277 cri.go:89] found id: "a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:38.495268   32277 cri.go:89] found id: ""
	I0819 17:59:38.495278   32277 logs.go:276] 1 containers: [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c]
	I0819 17:59:38.495318   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.498293   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:59:38.498348   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:59:38.529762   32277 cri.go:89] found id: "da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:38.529787   32277 cri.go:89] found id: ""
	I0819 17:59:38.529797   32277 logs.go:276] 1 containers: [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97]
	I0819 17:59:38.529840   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.532733   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:59:38.532778   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:59:38.563652   32277 cri.go:89] found id: "7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:38.563673   32277 cri.go:89] found id: ""
	I0819 17:59:38.563682   32277 logs.go:276] 1 containers: [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664]
	I0819 17:59:38.563734   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.566769   32277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:59:38.566817   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:59:38.598710   32277 cri.go:89] found id: "f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:38.598733   32277 cri.go:89] found id: ""
	I0819 17:59:38.598742   32277 logs.go:276] 1 containers: [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf]
	I0819 17:59:38.598792   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.601802   32277 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:59:38.601824   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:59:38.688508   32277 logs.go:123] Gathering logs for kube-apiserver [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350] ...
	I0819 17:59:38.688532   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:38.731332   32277 logs.go:123] Gathering logs for kube-controller-manager [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664] ...
	I0819 17:59:38.731357   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:38.784927   32277 logs.go:123] Gathering logs for kindnet [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf] ...
	I0819 17:59:38.784952   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:38.821388   32277 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:59:38.821412   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:59:38.892973   32277 logs.go:123] Gathering logs for container status ...
	I0819 17:59:38.892998   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:59:38.931248   32277 logs.go:123] Gathering logs for dmesg ...
	I0819 17:59:38.931272   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:59:38.942339   32277 logs.go:123] Gathering logs for etcd [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f] ...
	I0819 17:59:38.942359   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:38.997979   32277 logs.go:123] Gathering logs for coredns [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8] ...
	I0819 17:59:38.998004   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:39.031258   32277 logs.go:123] Gathering logs for kube-scheduler [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c] ...
	I0819 17:59:39.031284   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:39.067004   32277 logs.go:123] Gathering logs for kube-proxy [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97] ...
	I0819 17:59:39.067030   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:39.097168   32277 logs.go:123] Gathering logs for kubelet ...
	I0819 17:59:39.097196   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:59:41.669508   32277 system_pods.go:59] 19 kube-system pods found
	I0819 17:59:41.669534   32277 system_pods.go:61] "coredns-6f6b679f8f-fc8vt" [ebed1ffd-53d1-4366-bdc1-29fb14ddbefb] Running
	I0819 17:59:41.669539   32277 system_pods.go:61] "csi-hostpath-attacher-0" [fd596b06-341f-47f2-a8da-e7dc64f41141] Running
	I0819 17:59:41.669543   32277 system_pods.go:61] "csi-hostpath-resizer-0" [559fd1c1-bd37-424c-84e0-dc698b2aed5d] Running
	I0819 17:59:41.669547   32277 system_pods.go:61] "csi-hostpathplugin-dl2zv" [d5dbe9eb-40e4-493c-86e6-0b23dcd5368a] Running
	I0819 17:59:41.669551   32277 system_pods.go:61] "etcd-addons-142951" [d3b8f60a-0668-45e1-ab67-58b8f3bb4b6f] Running
	I0819 17:59:41.669555   32277 system_pods.go:61] "kindnet-v2xdp" [d80fcb4a-57b1-4a3f-a374-cf3eb49eaad9] Running
	I0819 17:59:41.669558   32277 system_pods.go:61] "kube-apiserver-addons-142951" [647bfa2f-59b3-40c4-9441-6c585868606c] Running
	I0819 17:59:41.669562   32277 system_pods.go:61] "kube-controller-manager-addons-142951" [fcf6b86a-dc2d-481c-b766-4acd5fabca72] Running
	I0819 17:59:41.669565   32277 system_pods.go:61] "kube-ingress-dns-minikube" [c80a5549-ce5e-4dcd-adca-60388c15eb01] Running
	I0819 17:59:41.669568   32277 system_pods.go:61] "kube-proxy-q94sk" [67c62ce2-b009-4e1e-b458-a932b2d8bda0] Running
	I0819 17:59:41.669572   32277 system_pods.go:61] "kube-scheduler-addons-142951" [62d2cbec-17d2-4363-a160-36caaa89544a] Running
	I0819 17:59:41.669575   32277 system_pods.go:61] "metrics-server-8988944d9-hggkq" [0dca4d1b-5042-4c63-b3e2-04f12c5f19a8] Running
	I0819 17:59:41.669578   32277 system_pods.go:61] "nvidia-device-plugin-daemonset-bc72h" [1afb0b8d-3754-410e-886b-723b6ec99725] Running
	I0819 17:59:41.669582   32277 system_pods.go:61] "registry-6fb4cdfc84-mflg4" [7a8a2fd6-50f4-4941-a77a-aa97fe6fde07] Running
	I0819 17:59:41.669587   32277 system_pods.go:61] "registry-proxy-cpszr" [4104108c-9aa8-4ddc-b4ab-13ffb2364b83] Running
	I0819 17:59:41.669590   32277 system_pods.go:61] "snapshot-controller-56fcc65765-bm9q4" [be809ad2-1210-4bd5-9d06-c1fc540796ef] Running
	I0819 17:59:41.669592   32277 system_pods.go:61] "snapshot-controller-56fcc65765-mrg2k" [9bb61f05-1178-4693-827c-8ec9467bb365] Running
	I0819 17:59:41.669596   32277 system_pods.go:61] "storage-provisioner" [22cafd60-bf3d-43f0-89cd-7cd1ed607e0a] Running
	I0819 17:59:41.669601   32277 system_pods.go:61] "tiller-deploy-b48cc5f79-gjp98" [c259324f-94be-46a4-9f28-bb1278b517b6] Running
	I0819 17:59:41.669608   32277 system_pods.go:74] duration metric: took 3.307626785s to wait for pod list to return data ...
	I0819 17:59:41.669616   32277 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:59:41.671548   32277 default_sa.go:45] found service account: "default"
	I0819 17:59:41.671570   32277 default_sa.go:55] duration metric: took 1.948088ms for default service account to be created ...
	I0819 17:59:41.671577   32277 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:59:41.678997   32277 system_pods.go:86] 19 kube-system pods found
	I0819 17:59:41.679017   32277 system_pods.go:89] "coredns-6f6b679f8f-fc8vt" [ebed1ffd-53d1-4366-bdc1-29fb14ddbefb] Running
	I0819 17:59:41.679024   32277 system_pods.go:89] "csi-hostpath-attacher-0" [fd596b06-341f-47f2-a8da-e7dc64f41141] Running
	I0819 17:59:41.679028   32277 system_pods.go:89] "csi-hostpath-resizer-0" [559fd1c1-bd37-424c-84e0-dc698b2aed5d] Running
	I0819 17:59:41.679032   32277 system_pods.go:89] "csi-hostpathplugin-dl2zv" [d5dbe9eb-40e4-493c-86e6-0b23dcd5368a] Running
	I0819 17:59:41.679035   32277 system_pods.go:89] "etcd-addons-142951" [d3b8f60a-0668-45e1-ab67-58b8f3bb4b6f] Running
	I0819 17:59:41.679038   32277 system_pods.go:89] "kindnet-v2xdp" [d80fcb4a-57b1-4a3f-a374-cf3eb49eaad9] Running
	I0819 17:59:41.679042   32277 system_pods.go:89] "kube-apiserver-addons-142951" [647bfa2f-59b3-40c4-9441-6c585868606c] Running
	I0819 17:59:41.679045   32277 system_pods.go:89] "kube-controller-manager-addons-142951" [fcf6b86a-dc2d-481c-b766-4acd5fabca72] Running
	I0819 17:59:41.679049   32277 system_pods.go:89] "kube-ingress-dns-minikube" [c80a5549-ce5e-4dcd-adca-60388c15eb01] Running
	I0819 17:59:41.679055   32277 system_pods.go:89] "kube-proxy-q94sk" [67c62ce2-b009-4e1e-b458-a932b2d8bda0] Running
	I0819 17:59:41.679058   32277 system_pods.go:89] "kube-scheduler-addons-142951" [62d2cbec-17d2-4363-a160-36caaa89544a] Running
	I0819 17:59:41.679062   32277 system_pods.go:89] "metrics-server-8988944d9-hggkq" [0dca4d1b-5042-4c63-b3e2-04f12c5f19a8] Running
	I0819 17:59:41.679066   32277 system_pods.go:89] "nvidia-device-plugin-daemonset-bc72h" [1afb0b8d-3754-410e-886b-723b6ec99725] Running
	I0819 17:59:41.679069   32277 system_pods.go:89] "registry-6fb4cdfc84-mflg4" [7a8a2fd6-50f4-4941-a77a-aa97fe6fde07] Running
	I0819 17:59:41.679072   32277 system_pods.go:89] "registry-proxy-cpszr" [4104108c-9aa8-4ddc-b4ab-13ffb2364b83] Running
	I0819 17:59:41.679075   32277 system_pods.go:89] "snapshot-controller-56fcc65765-bm9q4" [be809ad2-1210-4bd5-9d06-c1fc540796ef] Running
	I0819 17:59:41.679079   32277 system_pods.go:89] "snapshot-controller-56fcc65765-mrg2k" [9bb61f05-1178-4693-827c-8ec9467bb365] Running
	I0819 17:59:41.679083   32277 system_pods.go:89] "storage-provisioner" [22cafd60-bf3d-43f0-89cd-7cd1ed607e0a] Running
	I0819 17:59:41.679086   32277 system_pods.go:89] "tiller-deploy-b48cc5f79-gjp98" [c259324f-94be-46a4-9f28-bb1278b517b6] Running
	I0819 17:59:41.679092   32277 system_pods.go:126] duration metric: took 7.510262ms to wait for k8s-apps to be running ...
	I0819 17:59:41.679100   32277 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:59:41.679137   32277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:59:41.690170   32277 system_svc.go:56] duration metric: took 11.064294ms WaitForService to wait for kubelet
	I0819 17:59:41.690193   32277 kubeadm.go:582] duration metric: took 2m9.019003469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:59:41.690215   32277 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:59:41.693301   32277 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 17:59:41.693330   32277 node_conditions.go:123] node cpu capacity is 8
	I0819 17:59:41.693345   32277 node_conditions.go:105] duration metric: took 3.124766ms to run NodePressure ...
	I0819 17:59:41.693358   32277 start.go:241] waiting for startup goroutines ...
	I0819 17:59:41.693368   32277 start.go:246] waiting for cluster config update ...
	I0819 17:59:41.693387   32277 start.go:255] writing updated cluster config ...
	I0819 17:59:41.693719   32277 ssh_runner.go:195] Run: rm -f paused
	I0819 17:59:41.739546   32277 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 17:59:41.741660   32277 out.go:177] * Done! kubectl is now configured to use "addons-142951" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 18:02:48 addons-142951 crio[1030]: time="2024-08-19 18:02:48.960882195Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/55643c67296c0306d6c942755c976fba48e836b9106c230eb413c0458bad2b91/merged/etc/passwd: no such file or directory"
	Aug 19 18:02:48 addons-142951 crio[1030]: time="2024-08-19 18:02:48.960911721Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/55643c67296c0306d6c942755c976fba48e836b9106c230eb413c0458bad2b91/merged/etc/group: no such file or directory"
	Aug 19 18:02:48 addons-142951 crio[1030]: time="2024-08-19 18:02:48.983488063Z" level=info msg="Stopped container 13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=b4922f40-7ea1-4660-be6d-97b1953c2864 name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 18:02:48 addons-142951 crio[1030]: time="2024-08-19 18:02:48.983989819Z" level=info msg="Stopping pod sandbox: 93712d78564861d8f3524169666e2ba81bb13bdff48583f6d5e3b3a31fe44961" id=b038189d-89b3-49bd-b9b0-1d36ed3a8437 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:02:48 addons-142951 crio[1030]: time="2024-08-19 18:02:48.984900982Z" level=info msg="Stopped pod sandbox: 93712d78564861d8f3524169666e2ba81bb13bdff48583f6d5e3b3a31fe44961" id=b038189d-89b3-49bd-b9b0-1d36ed3a8437 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:02:48 addons-142951 crio[1030]: time="2024-08-19 18:02:48.995484983Z" level=info msg="Created container 8b2a4e6e93905f4a4b141fac73dc86b7cc5fefe94ecc5d153f79e2fd2b680074: default/hello-world-app-55bf9c44b4-pxt4b/hello-world-app" id=10579a29-f645-48f6-a8fb-5cd39f6d5222 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 18:02:48 addons-142951 crio[1030]: time="2024-08-19 18:02:48.995974101Z" level=info msg="Starting container: 8b2a4e6e93905f4a4b141fac73dc86b7cc5fefe94ecc5d153f79e2fd2b680074" id=c323da9a-1950-4459-bd5d-5f4678774a96 name=/runtime.v1.RuntimeService/StartContainer
	Aug 19 18:02:49 addons-142951 crio[1030]: time="2024-08-19 18:02:49.001937691Z" level=info msg="Started container" PID=11175 containerID=8b2a4e6e93905f4a4b141fac73dc86b7cc5fefe94ecc5d153f79e2fd2b680074 description=default/hello-world-app-55bf9c44b4-pxt4b/hello-world-app id=c323da9a-1950-4459-bd5d-5f4678774a96 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bac4fbb575a8b2b7a3fe936983e9651fc62bb7454db82674bf09bb5c1925c062
	Aug 19 18:02:49 addons-142951 crio[1030]: time="2024-08-19 18:02:49.015913492Z" level=info msg="Removing container: 13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271" id=fd408e66-6245-4e5a-aa13-49ffc4bffbdc name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:02:49 addons-142951 crio[1030]: time="2024-08-19 18:02:49.029085129Z" level=info msg="Removed container 13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=fd408e66-6245-4e5a-aa13-49ffc4bffbdc name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:02:50 addons-142951 crio[1030]: time="2024-08-19 18:02:50.810892898Z" level=info msg="Stopping container: e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2 (timeout: 2s)" id=06255c76-978b-4578-b29c-c900bdc1473b name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.817342023Z" level=warning msg="Stopping container e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=06255c76-978b-4578-b29c-c900bdc1473b name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 18:02:52 addons-142951 conmon[5671]: conmon e53684b8506edaa8e314 <ninfo>: container 5683 exited with status 137
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.947993915Z" level=info msg="Stopped container e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2: ingress-nginx/ingress-nginx-controller-bc57996ff-9qvr6/controller" id=06255c76-978b-4578-b29c-c900bdc1473b name=/runtime.v1.RuntimeService/StopContainer
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.948512435Z" level=info msg="Stopping pod sandbox: 8e99b220ce93a9c72be8b7aadd4059ae49cb5879876f35ec0546b39464f90427" id=03aaf25e-d8c1-4ef4-8021-dc4fd1a8f485 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.951846000Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-D3BL37ZWGP2N4Z2K - [0:0]\n:KUBE-HP-A2MMPC7B3XMMTY74 - [0:0]\n-X KUBE-HP-A2MMPC7B3XMMTY74\n-X KUBE-HP-D3BL37ZWGP2N4Z2K\nCOMMIT\n"
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.953167517Z" level=info msg="Closing host port tcp:80"
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.953209450Z" level=info msg="Closing host port tcp:443"
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.954553634Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.954575566Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.954708297Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-9qvr6 Namespace:ingress-nginx ID:8e99b220ce93a9c72be8b7aadd4059ae49cb5879876f35ec0546b39464f90427 UID:898edbc7-d5f2-4485-b388-f62671567457 NetNS:/var/run/netns/a41294f5-47d0-48d7-86cb-f1e22cc0f241 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.954828530Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-9qvr6 from CNI network \"kindnet\" (type=ptp)"
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.986603032Z" level=info msg="Stopped pod sandbox: 8e99b220ce93a9c72be8b7aadd4059ae49cb5879876f35ec0546b39464f90427" id=03aaf25e-d8c1-4ef4-8021-dc4fd1a8f485 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:02:53 addons-142951 crio[1030]: time="2024-08-19 18:02:53.025741587Z" level=info msg="Removing container: e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2" id=e715886f-e641-40e6-b8b9-4166f7e12111 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:02:53 addons-142951 crio[1030]: time="2024-08-19 18:02:53.037414881Z" level=info msg="Removed container e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2: ingress-nginx/ingress-nginx-controller-bc57996ff-9qvr6/controller" id=e715886f-e641-40e6-b8b9-4166f7e12111 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8b2a4e6e93905       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        8 seconds ago       Running             hello-world-app           0                   bac4fbb575a8b       hello-world-app-55bf9c44b4-pxt4b
	a9a83851c9eee       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   00d7d6838f8fb       nginx
	6240ede9cf582       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   f3567280887ba       busybox
	1faa132901086       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   a27b8af2ecf8e       metrics-server-8988944d9-hggkq
	1aa875218626f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   e0c608ee1cefd       local-path-provisioner-86d989889c-lr4nz
	1fc3551382099       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             4 minutes ago       Exited              patch                     2                   b8ac4673d7b6f       ingress-nginx-admission-patch-z9wtf
	c31e1c0337f3b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   5 minutes ago       Exited              create                    0                   a02a29c77f039       ingress-nginx-admission-create-n4fmk
	41cc2fea90c47       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   1e757acdc6536       storage-provisioner
	bdd3e647d13fe       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   cfc61b5cb5a0b       coredns-6f6b679f8f-fc8vt
	f1d4a608c4ff6       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                           5 minutes ago       Running             kindnet-cni               0                   3a761bb9d42a7       kindnet-v2xdp
	da12ebabc01a5       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   d6058c2c1b050       kube-proxy-q94sk
	7858ffc81956b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   0ba9d35fb0d2d       kube-controller-manager-addons-142951
	5bd5f680a9f96       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   2ec493c53f5ca       kube-apiserver-addons-142951
	a43a7b5c45d60       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   5e197d657734a       kube-scheduler-addons-142951
	bad09b5f8d830       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   6dcfa81a1bfdf       etcd-addons-142951
	
	
	==> coredns [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8] <==
	[INFO] 10.244.0.18:41099 - 13528 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087104s
	[INFO] 10.244.0.18:34639 - 26503 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005034601s
	[INFO] 10.244.0.18:34639 - 18171 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005828257s
	[INFO] 10.244.0.18:53556 - 31976 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004574803s
	[INFO] 10.244.0.18:53556 - 47083 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004782343s
	[INFO] 10.244.0.18:48640 - 17596 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003592381s
	[INFO] 10.244.0.18:48640 - 21177 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00527288s
	[INFO] 10.244.0.18:53493 - 56716 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000062911s
	[INFO] 10.244.0.18:53493 - 21896 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000088166s
	[INFO] 10.244.0.21:42460 - 18099 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178422s
	[INFO] 10.244.0.21:40118 - 21161 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000255157s
	[INFO] 10.244.0.21:36980 - 5595 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122751s
	[INFO] 10.244.0.21:39783 - 39206 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017752s
	[INFO] 10.244.0.21:49330 - 10245 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086237s
	[INFO] 10.244.0.21:53682 - 8635 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117262s
	[INFO] 10.244.0.21:34287 - 20837 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005155218s
	[INFO] 10.244.0.21:55193 - 6962 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005241981s
	[INFO] 10.244.0.21:40647 - 44533 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004754668s
	[INFO] 10.244.0.21:49338 - 18777 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005432948s
	[INFO] 10.244.0.21:41251 - 37701 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004544251s
	[INFO] 10.244.0.21:60483 - 40387 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004485269s
	[INFO] 10.244.0.21:51921 - 39378 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000706804s
	[INFO] 10.244.0.21:35130 - 14324 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000814019s
	[INFO] 10.244.0.24:42753 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000232013s
	[INFO] 10.244.0.24:36499 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148211s
	
	
	==> describe nodes <==
	Name:               addons-142951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-142951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=addons-142951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_57_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-142951
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:57:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-142951
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:02:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:01:01 +0000   Mon, 19 Aug 2024 17:57:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:01:01 +0000   Mon, 19 Aug 2024 17:57:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:01:01 +0000   Mon, 19 Aug 2024 17:57:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:01:01 +0000   Mon, 19 Aug 2024 17:57:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-142951
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac13505620b442b6bb748f645ef91266
	  System UUID:                a1df2279-d565-4b6c-bce8-72ba674e5fd0
	  Boot ID:                    78fba809-e96d-46e8-9b80-0c45215ddcd4
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     hello-world-app-55bf9c44b4-pxt4b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 coredns-6f6b679f8f-fc8vt                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m24s
	  kube-system                 etcd-addons-142951                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m30s
	  kube-system                 kindnet-v2xdp                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m24s
	  kube-system                 kube-apiserver-addons-142951               250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-controller-manager-addons-142951      200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-proxy-q94sk                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-addons-142951               100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 metrics-server-8988944d9-hggkq             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         5m20s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  local-path-storage          local-path-provisioner-86d989889c-lr4nz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m20s  kube-proxy       
	  Normal   Starting                 5m30s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m30s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m30s  kubelet          Node addons-142951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m30s  kubelet          Node addons-142951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m30s  kubelet          Node addons-142951 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m25s  node-controller  Node addons-142951 event: Registered Node addons-142951 in Controller
	  Normal   NodeReady                5m6s   kubelet          Node addons-142951 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000606] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000604] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000620] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000610] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.563404] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.048454] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005566] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.011360] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002293] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.012653] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.070628] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 18:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[  +1.020545] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[  +2.015804] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[  +4.031615] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[  +8.191254] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[Aug19 18:01] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[ +33.528823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	
	
	==> etcd [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f] <==
	{"level":"info","ts":"2024-08-19T17:57:37.260780Z","caller":"traceutil/trace.go:171","msg":"trace[1468218613] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"184.04039ms","start":"2024-08-19T17:57:37.076726Z","end":"2024-08-19T17:57:37.260766Z","steps":["trace[1468218613] 'process raft request'  (duration: 182.899769ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:37.265778Z","caller":"traceutil/trace.go:171","msg":"trace[1602020941] linearizableReadLoop","detail":"{readStateIndex:465; appliedIndex:463; }","duration":"106.076241ms","start":"2024-08-19T17:57:37.159691Z","end":"2024-08-19T17:57:37.265767Z","steps":["trace[1602020941] 'read index received'  (duration: 101.222768ms)","trace[1602020941] 'applied index is now lower than readState.Index'  (duration: 4.852994ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:57:37.265885Z","caller":"traceutil/trace.go:171","msg":"trace[2127705441] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"102.352146ms","start":"2024-08-19T17:57:37.163451Z","end":"2024-08-19T17:57:37.265803Z","steps":["trace[2127705441] 'process raft request'  (duration: 102.142904ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:37.266100Z","caller":"traceutil/trace.go:171","msg":"trace[1010502430] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"106.639352ms","start":"2024-08-19T17:57:37.159451Z","end":"2024-08-19T17:57:37.266090Z","steps":["trace[1010502430] 'process raft request'  (duration: 105.94225ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:37.266291Z","caller":"traceutil/trace.go:171","msg":"trace[1349800991] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"106.641811ms","start":"2024-08-19T17:57:37.159641Z","end":"2024-08-19T17:57:37.266283Z","steps":["trace[1349800991] 'process raft request'  (duration: 105.873765ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:37.266467Z","caller":"traceutil/trace.go:171","msg":"trace[1516627403] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"106.63181ms","start":"2024-08-19T17:57:37.159826Z","end":"2024-08-19T17:57:37.266458Z","steps":["trace[1516627403] 'process raft request'  (duration: 105.736115ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:37.265905Z","caller":"traceutil/trace.go:171","msg":"trace[995492955] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"102.246326ms","start":"2024-08-19T17:57:37.163651Z","end":"2024-08-19T17:57:37.265897Z","steps":["trace[995492955] 'process raft request'  (duration: 101.96598ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:37.266299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.593549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:1 size:4640"}
	{"level":"info","ts":"2024-08-19T17:57:37.266669Z","caller":"traceutil/trace.go:171","msg":"trace[1136239668] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:1; response_revision:460; }","duration":"106.968924ms","start":"2024-08-19T17:57:37.159689Z","end":"2024-08-19T17:57:37.266658Z","steps":["trace[1136239668] 'agreement among raft nodes before linearized reading'  (duration: 106.570965ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:37.268571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.794583ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/tiller-clusterrolebinding\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:57:37.268614Z","caller":"traceutil/trace.go:171","msg":"trace[1245973959] range","detail":"{range_begin:/registry/clusterrolebindings/tiller-clusterrolebinding; range_end:; response_count:0; response_revision:461; }","duration":"102.849968ms","start":"2024-08-19T17:57:37.165755Z","end":"2024-08-19T17:57:37.268605Z","steps":["trace[1245973959] 'agreement among raft nodes before linearized reading'  (duration: 102.773798ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:37.268739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.919406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:57:37.268768Z","caller":"traceutil/trace.go:171","msg":"trace[915306369] range","detail":"{range_begin:/registry/serviceaccounts/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:461; }","duration":"103.948723ms","start":"2024-08-19T17:57:37.164811Z","end":"2024-08-19T17:57:37.268760Z","steps":["trace[915306369] 'agreement among raft nodes before linearized reading'  (duration: 103.904279ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:37.268879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.022192ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-19T17:57:37.268914Z","caller":"traceutil/trace.go:171","msg":"trace[1648820651] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:461; }","duration":"106.057458ms","start":"2024-08-19T17:57:37.162848Z","end":"2024-08-19T17:57:37.268905Z","steps":["trace[1648820651] 'agreement among raft nodes before linearized reading'  (duration: 106.001205ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:37.269032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.941117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-08-19T17:57:37.269063Z","caller":"traceutil/trace.go:171","msg":"trace[41264408] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:461; }","duration":"107.975286ms","start":"2024-08-19T17:57:37.161081Z","end":"2024-08-19T17:57:37.269056Z","steps":["trace[41264408] 'agreement among raft nodes before linearized reading'  (duration: 107.919777ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:39.161931Z","caller":"traceutil/trace.go:171","msg":"trace[1237943717] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"103.575554ms","start":"2024-08-19T17:57:39.058340Z","end":"2024-08-19T17:57:39.161915Z","steps":["trace[1237943717] 'process raft request'  (duration: 15.192953ms)","trace[1237943717] 'compare'  (duration: 87.833046ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:57:39.162087Z","caller":"traceutil/trace.go:171","msg":"trace[1152797025] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"103.362806ms","start":"2024-08-19T17:57:39.058716Z","end":"2024-08-19T17:57:39.162079Z","steps":["trace[1152797025] 'process raft request'  (duration: 102.774597ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:39.162256Z","caller":"traceutil/trace.go:171","msg":"trace[160442894] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"103.195904ms","start":"2024-08-19T17:57:39.059050Z","end":"2024-08-19T17:57:39.162245Z","steps":["trace[160442894] 'process raft request'  (duration: 102.482318ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:39.162376Z","caller":"traceutil/trace.go:171","msg":"trace[1696024948] linearizableReadLoop","detail":"{readStateIndex:626; appliedIndex:624; }","duration":"103.481477ms","start":"2024-08-19T17:57:39.058888Z","end":"2024-08-19T17:57:39.162369Z","steps":["trace[1696024948] 'read index received'  (duration: 14.505194ms)","trace[1696024948] 'applied index is now lower than readState.Index'  (duration: 88.974533ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:57:39.162514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.611778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/ingress-nginx\" ","response":"range_response_count:1 size:849"}
	{"level":"info","ts":"2024-08-19T17:57:39.162547Z","caller":"traceutil/trace.go:171","msg":"trace[718473624] range","detail":"{range_begin:/registry/namespaces/ingress-nginx; range_end:; response_count:1; response_revision:614; }","duration":"103.655227ms","start":"2024-08-19T17:57:39.058884Z","end":"2024-08-19T17:57:39.162539Z","steps":["trace[718473624] 'agreement among raft nodes before linearized reading'  (duration: 103.54972ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:39.163309Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.949158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/job-controller\" ","response":"range_response_count:1 size:206"}
	{"level":"info","ts":"2024-08-19T17:57:39.163407Z","caller":"traceutil/trace.go:171","msg":"trace[876524858] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/job-controller; range_end:; response_count:1; response_revision:615; }","duration":"102.050918ms","start":"2024-08-19T17:57:39.061345Z","end":"2024-08-19T17:57:39.163396Z","steps":["trace[876524858] 'agreement among raft nodes before linearized reading'  (duration: 101.921515ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:02:57 up  1:45,  0 users,  load average: 0.21, 0.63, 0.35
	Linux addons-142951 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf] <==
	E0819 18:01:50.577156       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 18:01:50.995066       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:01:50.995098       1 main.go:299] handling current node
	I0819 18:02:00.994361       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:00.994395       1 main.go:299] handling current node
	W0819 18:02:03.878210       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:02:03.878245       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 18:02:10.994525       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:10.994555       1 main.go:299] handling current node
	W0819 18:02:20.219927       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:02:20.219960       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 18:02:20.995047       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:20.995080       1 main.go:299] handling current node
	W0819 18:02:26.769212       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 18:02:26.769248       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 18:02:30.995025       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:30.995061       1 main.go:299] handling current node
	I0819 18:02:40.994018       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:40.994051       1 main.go:299] handling current node
	W0819 18:02:45.974738       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:02:45.974766       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 18:02:50.549541       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:02:50.549578       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 18:02:50.994583       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:02:50.994628       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350] <==
	E0819 17:59:31.579693       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.83.110:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.83.110:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.83.110:443: connect: connection refused" logger="UnhandledError"
	I0819 17:59:31.612141       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0819 17:59:49.159408       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59696: use of closed network connection
	E0819 17:59:49.309088       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59720: use of closed network connection
	I0819 18:00:03.938448       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 18:00:04.960820       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 18:00:14.830048       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0819 18:00:16.001854       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.26:57418: read: connection reset by peer
	I0819 18:00:28.951604       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 18:00:29.167228       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.213.191"}
	I0819 18:00:31.505238       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.197.187"}
	I0819 18:00:49.865797       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:00:49.865842       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:00:49.878567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:00:49.878725       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:00:49.881171       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:00:49.881287       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:00:49.963301       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:00:49.963422       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:00:49.979476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:00:49.979515       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 18:00:50.881049       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 18:00:50.980577       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 18:00:50.986314       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0819 18:02:48.181287       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.105.25"}
	
	
	==> kube-controller-manager [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664] <==
	W0819 18:01:37.441646       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:01:37.441683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:01:56.114153       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:01:56.114196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:01:58.924146       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:01:58.924183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:01:59.683101       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:01:59.683138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:02:23.499962       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:02:23.499998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:02:31.126824       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:02:31.126863       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:02:33.959623       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:02:33.959665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:02:45.065292       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:02:45.065327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 18:02:47.986720       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.296372ms"
	I0819 18:02:47.990017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="3.25607ms"
	I0819 18:02:47.990087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="32.939µs"
	I0819 18:02:47.993661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.319µs"
	I0819 18:02:49.030205       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.876946ms"
	I0819 18:02:49.030302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="57.24µs"
	I0819 18:02:49.800672       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0819 18:02:49.801983       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="5.357µs"
	I0819 18:02:49.804124       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	
	
	==> kube-proxy [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97] <==
	I0819 17:57:36.363675       1 server_linux.go:66] "Using iptables proxy"
	I0819 17:57:37.276472       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 17:57:37.276538       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:57:37.560337       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 17:57:37.560468       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:57:37.673720       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:57:37.676266       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:57:37.676418       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:57:37.678156       1 config.go:197] "Starting service config controller"
	I0819 17:57:37.679639       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:57:37.678726       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:57:37.679671       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:57:37.679325       1 config.go:326] "Starting node config controller"
	I0819 17:57:37.679680       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:57:37.779937       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:57:37.780009       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:57:37.780043       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c] <==
	E0819 17:57:25.382431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0819 17:57:25.382447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:25.382568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 17:57:25.382594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:25.382621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0819 17:57:25.382629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:57:25.382646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0819 17:57:25.382647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:25.382662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0819 17:57:25.382771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:57:25.382776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0819 17:57:25.382791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:25.382682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:57:25.382816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:26.225317       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:57:26.225354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:26.239686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 17:57:26.239716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:26.307124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:57:26.307173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:26.320392       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:57:26.320429       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 17:57:26.433418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:57:26.433460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 17:57:28.880589       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:02:48 addons-142951 kubelet[1633]: I0819 18:02:48.098395    1633 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7pdp\" (UniqueName: \"kubernetes.io/projected/f58fdd45-ba67-47ec-8441-c8e8add58f28-kube-api-access-z7pdp\") pod \"hello-world-app-55bf9c44b4-pxt4b\" (UID: \"f58fdd45-ba67-47ec-8441-c8e8add58f28\") " pod="default/hello-world-app-55bf9c44b4-pxt4b"
	Aug 19 18:02:49 addons-142951 kubelet[1633]: I0819 18:02:49.015021    1633 scope.go:117] "RemoveContainer" containerID="13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271"
	Aug 19 18:02:49 addons-142951 kubelet[1633]: I0819 18:02:49.029330    1633 scope.go:117] "RemoveContainer" containerID="13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271"
	Aug 19 18:02:49 addons-142951 kubelet[1633]: E0819 18:02:49.029751    1633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271\": container with ID starting with 13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271 not found: ID does not exist" containerID="13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271"
	Aug 19 18:02:49 addons-142951 kubelet[1633]: I0819 18:02:49.029806    1633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271"} err="failed to get container status \"13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271\": rpc error: code = NotFound desc = could not find container \"13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271\": container with ID starting with 13518526319de402a4cdff6b1fb1bc6bd90a44e13d20b8ecccc67e95c614f271 not found: ID does not exist"
	Aug 19 18:02:49 addons-142951 kubelet[1633]: I0819 18:02:49.103416    1633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4bjk\" (UniqueName: \"kubernetes.io/projected/c80a5549-ce5e-4dcd-adca-60388c15eb01-kube-api-access-w4bjk\") pod \"c80a5549-ce5e-4dcd-adca-60388c15eb01\" (UID: \"c80a5549-ce5e-4dcd-adca-60388c15eb01\") "
	Aug 19 18:02:49 addons-142951 kubelet[1633]: I0819 18:02:49.105211    1633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80a5549-ce5e-4dcd-adca-60388c15eb01-kube-api-access-w4bjk" (OuterVolumeSpecName: "kube-api-access-w4bjk") pod "c80a5549-ce5e-4dcd-adca-60388c15eb01" (UID: "c80a5549-ce5e-4dcd-adca-60388c15eb01"). InnerVolumeSpecName "kube-api-access-w4bjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 18:02:49 addons-142951 kubelet[1633]: I0819 18:02:49.204423    1633 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-w4bjk\" (UniqueName: \"kubernetes.io/projected/c80a5549-ce5e-4dcd-adca-60388c15eb01-kube-api-access-w4bjk\") on node \"addons-142951\" DevicePath \"\""
	Aug 19 18:02:49 addons-142951 kubelet[1633]: I0819 18:02:49.324118    1633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-pxt4b" podStartSLOduration=1.6975977960000002 podStartE2EDuration="2.324101615s" podCreationTimestamp="2024-08-19 18:02:47 +0000 UTC" firstStartedPulling="2024-08-19 18:02:48.319831541 +0000 UTC m=+320.808917274" lastFinishedPulling="2024-08-19 18:02:48.946335369 +0000 UTC m=+321.435421093" observedRunningTime="2024-08-19 18:02:49.023593414 +0000 UTC m=+321.512679165" watchObservedRunningTime="2024-08-19 18:02:49.324101615 +0000 UTC m=+321.813187357"
	Aug 19 18:02:49 addons-142951 kubelet[1633]: I0819 18:02:49.682221    1633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80a5549-ce5e-4dcd-adca-60388c15eb01" path="/var/lib/kubelet/pods/c80a5549-ce5e-4dcd-adca-60388c15eb01/volumes"
	Aug 19 18:02:51 addons-142951 kubelet[1633]: I0819 18:02:51.681565    1633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c2b2d8-199b-403d-a588-8c2da116672b" path="/var/lib/kubelet/pods/25c2b2d8-199b-403d-a588-8c2da116672b/volumes"
	Aug 19 18:02:51 addons-142951 kubelet[1633]: I0819 18:02:51.681953    1633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4989222c-a843-4405-bfca-ab802fd728a5" path="/var/lib/kubelet/pods/4989222c-a843-4405-bfca-ab802fd728a5/volumes"
	Aug 19 18:02:53 addons-142951 kubelet[1633]: I0819 18:02:53.024861    1633 scope.go:117] "RemoveContainer" containerID="e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2"
	Aug 19 18:02:53 addons-142951 kubelet[1633]: I0819 18:02:53.037593    1633 scope.go:117] "RemoveContainer" containerID="e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2"
	Aug 19 18:02:53 addons-142951 kubelet[1633]: E0819 18:02:53.037903    1633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2\": container with ID starting with e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2 not found: ID does not exist" containerID="e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2"
	Aug 19 18:02:53 addons-142951 kubelet[1633]: I0819 18:02:53.037941    1633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2"} err="failed to get container status \"e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2\": rpc error: code = NotFound desc = could not find container \"e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2\": container with ID starting with e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2 not found: ID does not exist"
	Aug 19 18:02:53 addons-142951 kubelet[1633]: I0819 18:02:53.126895    1633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/898edbc7-d5f2-4485-b388-f62671567457-webhook-cert\") pod \"898edbc7-d5f2-4485-b388-f62671567457\" (UID: \"898edbc7-d5f2-4485-b388-f62671567457\") "
	Aug 19 18:02:53 addons-142951 kubelet[1633]: I0819 18:02:53.126941    1633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjfs7\" (UniqueName: \"kubernetes.io/projected/898edbc7-d5f2-4485-b388-f62671567457-kube-api-access-hjfs7\") pod \"898edbc7-d5f2-4485-b388-f62671567457\" (UID: \"898edbc7-d5f2-4485-b388-f62671567457\") "
	Aug 19 18:02:53 addons-142951 kubelet[1633]: I0819 18:02:53.128587    1633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/898edbc7-d5f2-4485-b388-f62671567457-kube-api-access-hjfs7" (OuterVolumeSpecName: "kube-api-access-hjfs7") pod "898edbc7-d5f2-4485-b388-f62671567457" (UID: "898edbc7-d5f2-4485-b388-f62671567457"). InnerVolumeSpecName "kube-api-access-hjfs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 19 18:02:53 addons-142951 kubelet[1633]: I0819 18:02:53.128691    1633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/898edbc7-d5f2-4485-b388-f62671567457-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "898edbc7-d5f2-4485-b388-f62671567457" (UID: "898edbc7-d5f2-4485-b388-f62671567457"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 19 18:02:53 addons-142951 kubelet[1633]: I0819 18:02:53.227579    1633 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hjfs7\" (UniqueName: \"kubernetes.io/projected/898edbc7-d5f2-4485-b388-f62671567457-kube-api-access-hjfs7\") on node \"addons-142951\" DevicePath \"\""
	Aug 19 18:02:53 addons-142951 kubelet[1633]: I0819 18:02:53.227613    1633 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/898edbc7-d5f2-4485-b388-f62671567457-webhook-cert\") on node \"addons-142951\" DevicePath \"\""
	Aug 19 18:02:53 addons-142951 kubelet[1633]: I0819 18:02:53.682254    1633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="898edbc7-d5f2-4485-b388-f62671567457" path="/var/lib/kubelet/pods/898edbc7-d5f2-4485-b388-f62671567457/volumes"
	Aug 19 18:02:57 addons-142951 kubelet[1633]: E0819 18:02:57.729647    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090577729449973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:02:57 addons-142951 kubelet[1633]: E0819 18:02:57.729678    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090577729449973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [41cc2fea90c47a7827759dee3094c7a22d3951da0957eac497b9ee9cfdf70ac6] <==
	I0819 17:57:52.064040       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 17:57:52.070925       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 17:57:52.070979       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 17:57:52.078052       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 17:57:52.078257       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-142951_c559a01b-a10e-4b95-88f0-1d537cbdbbf2!
	I0819 17:57:52.078525       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe4283e4-2495-42de-8646-1972a4e1b497", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-142951_c559a01b-a10e-4b95-88f0-1d537cbdbbf2 became leader
	I0819 17:57:52.179276       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-142951_c559a01b-a10e-4b95-88f0-1d537cbdbbf2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-142951 -n addons-142951
helpers_test.go:261: (dbg) Run:  kubectl --context addons-142951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (149.97s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (296.97s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.845621ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-hggkq" [0dca4d1b-5042-4c63-b3e2-04f12c5f19a8] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002523199s
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (63.228511ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 2m29.561973003s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (63.695189ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 2m31.392404801s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (70.654128ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 2m34.468765447s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (62.640084ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 2m44.172646851s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (68.924877ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 2m54.735454797s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (64.852165ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 3m3.923047196s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (60.124605ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 3m19.356248383s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (61.207738ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 3m49.663401643s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (60.339359ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 4m54.57876933s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (60.122998ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 6m6.691759008s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-142951 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-142951 top pods -n kube-system: exit status 1 (60.83439ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-fc8vt, age: 7m19.206785293s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-142951
helpers_test.go:235: (dbg) docker inspect addons-142951:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383",
	        "Created": "2024-08-19T17:57:11.410180908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33038,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T17:57:11.531105098Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:197224e1b90979b98de246567852a03b60e3aa31dcd0de02a456282118daeb84",
	        "ResolvConfPath": "/var/lib/docker/containers/010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383/hostname",
	        "HostsPath": "/var/lib/docker/containers/010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383/hosts",
	        "LogPath": "/var/lib/docker/containers/010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383/010445039d67d19f3294dd18609a0145ef64742dff1c5c12c0255ddf925bb383-json.log",
	        "Name": "/addons-142951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-142951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-142951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bdabc06652087fcf79fb8e7008c3c883cfb0adc84e9a9f46231b4a492ec7f0b3-init/diff:/var/lib/docker/overlay2/0c2c9fdec01bef3a098fb8513a31b324e686eebb183f0aaad2be170703b9d191/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bdabc06652087fcf79fb8e7008c3c883cfb0adc84e9a9f46231b4a492ec7f0b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bdabc06652087fcf79fb8e7008c3c883cfb0adc84e9a9f46231b4a492ec7f0b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bdabc06652087fcf79fb8e7008c3c883cfb0adc84e9a9f46231b4a492ec7f0b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-142951",
	                "Source": "/var/lib/docker/volumes/addons-142951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-142951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-142951",
	                "name.minikube.sigs.k8s.io": "addons-142951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ce2a820d1fe646445374e09740096c8a15f3cd8ce78c5388c2cd41d7746ff653",
	            "SandboxKey": "/var/run/docker/netns/ce2a820d1fe6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-142951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "26871bf810f1f705018de8bb3fd749522c8877a8a4a89af41f8045bb058152ac",
	                    "EndpointID": "535bad70c7ea5053a6056c83f3e2e7bb077f3683164fd3bf0359ff7c672ae775",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-142951",
	                        "010445039d67"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
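The inspect dump above contains far more than a post-mortem usually needs; docker inspect accepts a Go template via -f to read back individual fields. For instance, the mapped SSH port and the container IP recorded above can be extracted directly (the port template is the same one minikube itself runs later in this log):

    # HostPort bound to 22/tcp (expected 32768 per the dump above)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-142951
    # Container IP on its network (expected 192.168.49.2)
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-142951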
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-142951 -n addons-142951
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-142951 logs -n 25: (1.031290685s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-314754                                                                   | download-docker-314754 | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-755146   | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC |                     |
	|         | binary-mirror-755146                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44393                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-755146                                                                     | binary-mirror-755146   | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	| addons  | disable dashboard -p                                                                        | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC |                     |
	|         | addons-142951                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC |                     |
	|         | addons-142951                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-142951 --wait=true                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 17:59 UTC | 19 Aug 24 17:59 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | addons-142951                                                                               |                        |         |         |                     |                     |
	| ip      | addons-142951 ip                                                                            | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-142951 ssh cat                                                                       | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | /opt/local-path-provisioner/pvc-c78e1662-15f1-40c8-8ca4-6b6d5b18666a_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | -p addons-142951                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | addons-142951                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | -p addons-142951                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-142951 ssh curl -s                                                                   | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-142951 addons                                                                        | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-142951 addons                                                                        | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:00 UTC | 19 Aug 24 18:00 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-142951 ip                                                                            | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:02 UTC | 19 Aug 24 18:02 UTC |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:02 UTC | 19 Aug 24 18:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-142951 addons disable                                                                | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:02 UTC | 19 Aug 24 18:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-142951 addons                                                                        | addons-142951          | jenkins | v1.33.1 | 19 Aug 24 18:04 UTC | 19 Aug 24 18:04 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:56:47
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:56:47.724282   32277 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:56:47.724500   32277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:56:47.724507   32277 out.go:358] Setting ErrFile to fd 2...
	I0819 17:56:47.724512   32277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:56:47.724666   32277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 17:56:47.725274   32277 out.go:352] Setting JSON to false
	I0819 17:56:47.726088   32277 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5958,"bootTime":1724084250,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:56:47.726138   32277 start.go:139] virtualization: kvm guest
	I0819 17:56:47.728120   32277 out.go:177] * [addons-142951] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:56:47.729252   32277 notify.go:220] Checking for updates...
	I0819 17:56:47.729261   32277 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 17:56:47.730359   32277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:56:47.731562   32277 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 17:56:47.732699   32277 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	I0819 17:56:47.733695   32277 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 17:56:47.734711   32277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:56:47.735820   32277 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:56:47.755762   32277 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:56:47.755860   32277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:56:47.801221   32277 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 17:56:47.792853029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 17:56:47.801319   32277 docker.go:307] overlay module found
	I0819 17:56:47.803138   32277 out.go:177] * Using the docker driver based on user configuration
	I0819 17:56:47.804344   32277 start.go:297] selected driver: docker
	I0819 17:56:47.804360   32277 start.go:901] validating driver "docker" against <nil>
	I0819 17:56:47.804370   32277 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:56:47.805056   32277 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:56:47.847937   32277 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 17:56:47.840245494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 17:56:47.848077   32277 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:56:47.848271   32277 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:56:47.849900   32277 out.go:177] * Using Docker driver with root privileges
	I0819 17:56:47.851222   32277 cni.go:84] Creating CNI manager for ""
	I0819 17:56:47.851236   32277 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:56:47.851245   32277 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:56:47.851290   32277 start.go:340] cluster config:
	{Name:addons-142951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-142951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:56:47.852500   32277 out.go:177] * Starting "addons-142951" primary control-plane node in "addons-142951" cluster
	I0819 17:56:47.853652   32277 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 17:56:47.854887   32277 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 17:56:47.855882   32277 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:56:47.855905   32277 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:56:47.855905   32277 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 17:56:47.855913   32277 cache.go:56] Caching tarball of preloaded images
	I0819 17:56:47.855974   32277 preload.go:172] Found /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 17:56:47.855985   32277 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 17:56:47.856266   32277 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/config.json ...
	I0819 17:56:47.856286   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/config.json: {Name:mke776199edf729a366eaa93bf40a10a81fb3522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:56:47.871758   32277 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 17:56:47.871861   32277 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 17:56:47.871877   32277 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 17:56:47.871881   32277 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 17:56:47.871891   32277 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 17:56:47.871897   32277 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 17:56:59.357589   32277 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 17:56:59.357631   32277 cache.go:194] Successfully downloaded all kic artifacts
	I0819 17:56:59.357674   32277 start.go:360] acquireMachinesLock for addons-142951: {Name:mke80a9d847714c8b2e4c449106f243d13aae04d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:56:59.357780   32277 start.go:364] duration metric: took 82.307µs to acquireMachinesLock for "addons-142951"
	I0819 17:56:59.357806   32277 start.go:93] Provisioning new machine with config: &{Name:addons-142951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-142951 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:56:59.357905   32277 start.go:125] createHost starting for "" (driver="docker")
	I0819 17:56:59.359740   32277 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 17:56:59.359962   32277 start.go:159] libmachine.API.Create for "addons-142951" (driver="docker")
	I0819 17:56:59.359991   32277 client.go:168] LocalClient.Create starting
	I0819 17:56:59.360104   32277 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem
	I0819 17:56:59.620701   32277 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem
	I0819 17:56:59.739545   32277 cli_runner.go:164] Run: docker network inspect addons-142951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 17:56:59.754767   32277 cli_runner.go:211] docker network inspect addons-142951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 17:56:59.754830   32277 network_create.go:284] running [docker network inspect addons-142951] to gather additional debugging logs...
	I0819 17:56:59.754850   32277 cli_runner.go:164] Run: docker network inspect addons-142951
	W0819 17:56:59.769322   32277 cli_runner.go:211] docker network inspect addons-142951 returned with exit code 1
	I0819 17:56:59.769346   32277 network_create.go:287] error running [docker network inspect addons-142951]: docker network inspect addons-142951: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-142951 not found
	I0819 17:56:59.769361   32277 network_create.go:289] output of [docker network inspect addons-142951]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-142951 not found
	
	** /stderr **
	I0819 17:56:59.769474   32277 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 17:56:59.784393   32277 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00192ed30}
	I0819 17:56:59.784434   32277 network_create.go:124] attempt to create docker network addons-142951 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 17:56:59.784468   32277 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-142951 addons-142951
	I0819 17:56:59.838175   32277 network_create.go:108] docker network addons-142951 192.168.49.0/24 created
	I0819 17:56:59.838210   32277 kic.go:121] calculated static IP "192.168.49.2" for the "addons-142951" container
	I0819 17:56:59.838280   32277 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 17:56:59.852260   32277 cli_runner.go:164] Run: docker volume create addons-142951 --label name.minikube.sigs.k8s.io=addons-142951 --label created_by.minikube.sigs.k8s.io=true
	I0819 17:56:59.867964   32277 oci.go:103] Successfully created a docker volume addons-142951
	I0819 17:56:59.868023   32277 cli_runner.go:164] Run: docker run --rm --name addons-142951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142951 --entrypoint /usr/bin/test -v addons-142951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 17:57:07.040061   32277 cli_runner.go:217] Completed: docker run --rm --name addons-142951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142951 --entrypoint /usr/bin/test -v addons-142951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (7.17200162s)
	I0819 17:57:07.040088   32277 oci.go:107] Successfully prepared a docker volume addons-142951
	I0819 17:57:07.040104   32277 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:57:07.040123   32277 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 17:57:07.040169   32277 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-142951:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 17:57:11.349443   32277 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-142951:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.309219526s)
	I0819 17:57:11.349493   32277 kic.go:203] duration metric: took 4.309367412s to extract preloaded images to volume ...
	W0819 17:57:11.349636   32277 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 17:57:11.349748   32277 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 17:57:11.396521   32277 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-142951 --name addons-142951 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-142951 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-142951 --network addons-142951 --ip 192.168.49.2 --volume addons-142951:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 17:57:11.691494   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Running}}
	I0819 17:57:11.708615   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:11.725662   32277 cli_runner.go:164] Run: docker exec addons-142951 stat /var/lib/dpkg/alternatives/iptables
	I0819 17:57:11.765459   32277 oci.go:144] the created container "addons-142951" has a running status.
	I0819 17:57:11.765487   32277 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa...
	I0819 17:57:11.912954   32277 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 17:57:11.933490   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:11.949448   32277 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 17:57:11.949474   32277 kic_runner.go:114] Args: [docker exec --privileged addons-142951 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 17:57:11.988957   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:12.014280   32277 machine.go:93] provisionDockerMachine start ...
	I0819 17:57:12.014360   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:12.030653   32277 main.go:141] libmachine: Using SSH client type: native
	I0819 17:57:12.030839   32277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 17:57:12.030851   32277 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 17:57:12.031443   32277 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35324->127.0.0.1:32768: read: connection reset by peer
	I0819 17:57:15.148129   32277 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-142951
	
	I0819 17:57:15.148158   32277 ubuntu.go:169] provisioning hostname "addons-142951"
	I0819 17:57:15.148205   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:15.163942   32277 main.go:141] libmachine: Using SSH client type: native
	I0819 17:57:15.164100   32277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 17:57:15.164113   32277 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-142951 && echo "addons-142951" | sudo tee /etc/hostname
	I0819 17:57:15.286780   32277 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-142951
	
	I0819 17:57:15.286850   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:15.303846   32277 main.go:141] libmachine: Using SSH client type: native
	I0819 17:57:15.304033   32277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 17:57:15.304051   32277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-142951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-142951/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-142951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:57:15.420851   32277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:57:15.420877   32277 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19468-24160/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-24160/.minikube}
	I0819 17:57:15.420909   32277 ubuntu.go:177] setting up certificates
	I0819 17:57:15.420921   32277 provision.go:84] configureAuth start
	I0819 17:57:15.420975   32277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142951
	I0819 17:57:15.437050   32277 provision.go:143] copyHostCerts
	I0819 17:57:15.437111   32277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem (1679 bytes)
	I0819 17:57:15.437259   32277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem (1078 bytes)
	I0819 17:57:15.437331   32277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem (1123 bytes)
	I0819 17:57:15.437396   32277 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem org=jenkins.addons-142951 san=[127.0.0.1 192.168.49.2 addons-142951 localhost minikube]
	I0819 17:57:15.625405   32277 provision.go:177] copyRemoteCerts
	I0819 17:57:15.625457   32277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:57:15.625490   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:15.641251   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:15.729170   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:57:15.749007   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:57:15.768646   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 17:57:15.788220   32277 provision.go:87] duration metric: took 367.281976ms to configureAuth
	I0819 17:57:15.788247   32277 ubuntu.go:193] setting minikube options for container-runtime
	I0819 17:57:15.788394   32277 config.go:182] Loaded profile config "addons-142951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:57:15.788473   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:15.806070   32277 main.go:141] libmachine: Using SSH client type: native
	I0819 17:57:15.806237   32277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0819 17:57:15.806252   32277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 17:57:16.003354   32277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 17:57:16.003378   32277 machine.go:96] duration metric: took 3.989079175s to provisionDockerMachine
	I0819 17:57:16.003387   32277 client.go:171] duration metric: took 16.643386931s to LocalClient.Create
	I0819 17:57:16.003405   32277 start.go:167] duration metric: took 16.643447497s to libmachine.API.Create "addons-142951"
	I0819 17:57:16.003412   32277 start.go:293] postStartSetup for "addons-142951" (driver="docker")
	I0819 17:57:16.003420   32277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:57:16.003466   32277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:57:16.003496   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:16.018955   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:16.105640   32277 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:57:16.108645   32277 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 17:57:16.108668   32277 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 17:57:16.108676   32277 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 17:57:16.108688   32277 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 17:57:16.108699   32277 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-24160/.minikube/addons for local assets ...
	I0819 17:57:16.108759   32277 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-24160/.minikube/files for local assets ...
	I0819 17:57:16.108782   32277 start.go:296] duration metric: took 105.365839ms for postStartSetup
	I0819 17:57:16.109056   32277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142951
	I0819 17:57:16.125539   32277 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/config.json ...
	I0819 17:57:16.125849   32277 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:57:16.125917   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:16.142158   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:16.225618   32277 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 17:57:16.229551   32277 start.go:128] duration metric: took 16.871632525s to createHost
	I0819 17:57:16.229572   32277 start.go:83] releasing machines lock for "addons-142951", held for 16.871779426s
	I0819 17:57:16.229632   32277 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-142951
	I0819 17:57:16.245668   32277 ssh_runner.go:195] Run: cat /version.json
	I0819 17:57:16.245704   32277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:57:16.245713   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:16.245756   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:16.262132   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:16.262808   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:16.344320   32277 ssh_runner.go:195] Run: systemctl --version
	I0819 17:57:16.348183   32277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 17:57:16.483148   32277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 17:57:16.487119   32277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:57:16.503330   32277 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 17:57:16.503429   32277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:57:16.528315   32277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0819 17:57:16.528338   32277 start.go:495] detecting cgroup driver to use...
	I0819 17:57:16.528366   32277 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 17:57:16.528398   32277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 17:57:16.540890   32277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 17:57:16.549859   32277 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:57:16.549906   32277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:57:16.561072   32277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:57:16.572895   32277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:57:16.648347   32277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:57:16.721318   32277 docker.go:233] disabling docker service ...
	I0819 17:57:16.721419   32277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:57:16.737371   32277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:57:16.746775   32277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:57:16.819702   32277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:57:16.901287   32277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 17:57:16.911166   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:57:16.924587   32277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 17:57:16.924631   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.932373   32277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 17:57:16.932413   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.940905   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.949000   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.957081   32277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:57:16.964475   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.972173   32277 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.984905   32277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 17:57:16.992625   32277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:57:16.999530   32277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:57:17.006496   32277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:57:17.075749   32277 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 17:57:17.177579   32277 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 17:57:17.177653   32277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 17:57:17.180650   32277 start.go:563] Will wait 60s for crictl version
	I0819 17:57:17.180699   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:57:17.183523   32277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:57:17.214182   32277 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 17:57:17.214266   32277 ssh_runner.go:195] Run: crio --version
	I0819 17:57:17.246180   32277 ssh_runner.go:195] Run: crio --version
	I0819 17:57:17.278064   32277 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 17:57:17.279202   32277 cli_runner.go:164] Run: docker network inspect addons-142951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 17:57:17.294524   32277 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 17:57:17.297615   32277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:57:17.306968   32277 kubeadm.go:883] updating cluster {Name:addons-142951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-142951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:57:17.307092   32277 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 17:57:17.307132   32277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:57:17.367715   32277 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:57:17.367733   32277 crio.go:433] Images already preloaded, skipping extraction
	I0819 17:57:17.367771   32277 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:57:17.396662   32277 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 17:57:17.396680   32277 cache_images.go:84] Images are preloaded, skipping loading
	I0819 17:57:17.396687   32277 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 17:57:17.396767   32277 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-142951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-142951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:57:17.396823   32277 ssh_runner.go:195] Run: crio config
	I0819 17:57:17.434906   32277 cni.go:84] Creating CNI manager for ""
	I0819 17:57:17.434923   32277 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:57:17.434931   32277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:57:17.434950   32277 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-142951 NodeName:addons-142951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:57:17.435086   32277 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-142951"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:57:17.435141   32277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:57:17.442656   32277 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:57:17.442718   32277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:57:17.450771   32277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 17:57:17.465967   32277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:57:17.480566   32277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0819 17:57:17.494846   32277 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 17:57:17.497697   32277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:57:17.506357   32277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:57:17.584274   32277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:57:17.595452   32277 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951 for IP: 192.168.49.2
	I0819 17:57:17.595472   32277 certs.go:194] generating shared ca certs ...
	I0819 17:57:17.595492   32277 certs.go:226] acquiring lock for ca certs: {Name:mk29d2f357e66b5ff77917021423cbbf2fc2a40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:17.595622   32277 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key
	I0819 17:57:17.998787   32277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt ...
	I0819 17:57:17.998813   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt: {Name:mk892498a9d94f742583f9e4d4534f0a394cf1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:17.998974   32277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key ...
	I0819 17:57:17.998985   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key: {Name:mk6308b41ac8dce85ac9fe41456952a216fd065b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:17.999056   32277 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key
	I0819 17:57:18.304422   32277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt ...
	I0819 17:57:18.304455   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt: {Name:mk843459abb4914769f87c4f7b640341b16ad5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.304622   32277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key ...
	I0819 17:57:18.304633   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key: {Name:mk01a481fe536041de09c79442a3c6ea5f83cc0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.304710   32277 certs.go:256] generating profile certs ...
	I0819 17:57:18.304758   32277 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.key
	I0819 17:57:18.304771   32277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt with IP's: []
	I0819 17:57:18.380640   32277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt ...
	I0819 17:57:18.380666   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: {Name:mk2d4a110123a4b16849e02b6ddee4f54ccaaace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.380832   32277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.key ...
	I0819 17:57:18.380849   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.key: {Name:mkfc2b488b758c56830434db2c6360a7aab30347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.380950   32277 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key.c28c9c86
	I0819 17:57:18.380971   32277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt.c28c9c86 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 17:57:18.445145   32277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt.c28c9c86 ...
	I0819 17:57:18.445169   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt.c28c9c86: {Name:mk7e9e5e75586a999b0884653ea24b1296f4ae1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.445335   32277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key.c28c9c86 ...
	I0819 17:57:18.445352   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key.c28c9c86: {Name:mkfd5dd2c92c97c19ab80158f609de57f9851b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.445445   32277 certs.go:381] copying /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt.c28c9c86 -> /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt
	I0819 17:57:18.445516   32277 certs.go:385] copying /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key.c28c9c86 -> /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key
	I0819 17:57:18.445561   32277 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.key
	I0819 17:57:18.445578   32277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.crt with IP's: []
	I0819 17:57:18.743386   32277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.crt ...
	I0819 17:57:18.743412   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.crt: {Name:mkc0dc7b756d678b1a511ffdb986f487ea23bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.743592   32277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.key ...
	I0819 17:57:18.743605   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.key: {Name:mk2d4e405e0e43210449f6a3c33edcb8e99c8fea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:18.743792   32277 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 17:57:18.743827   32277 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:57:18.743850   32277 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:57:18.743873   32277 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem (1679 bytes)
	I0819 17:57:18.744438   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:57:18.765673   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:57:18.785444   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:57:18.805040   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 17:57:18.825183   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 17:57:18.844572   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:57:18.864304   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:57:18.884116   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:57:18.904258   32277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:57:18.923594   32277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:57:18.937820   32277 ssh_runner.go:195] Run: openssl version
	I0819 17:57:18.942475   32277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:57:18.950123   32277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:57:18.952939   32277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:57:18.952973   32277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:57:18.958751   32277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 17:57:18.966040   32277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:57:18.968577   32277 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:57:18.968616   32277 kubeadm.go:392] StartCluster: {Name:addons-142951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-142951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:57:18.968688   32277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 17:57:18.968732   32277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:57:18.998867   32277 cri.go:89] found id: ""
	I0819 17:57:18.998916   32277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 17:57:19.006203   32277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 17:57:19.013397   32277 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 17:57:19.013436   32277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:57:19.020348   32277 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:57:19.020366   32277 kubeadm.go:157] found existing configuration files:
	
	I0819 17:57:19.020398   32277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:57:19.027212   32277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:57:19.027250   32277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:57:19.033818   32277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:57:19.040784   32277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:57:19.040821   32277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:57:19.047589   32277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:57:19.054475   32277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:57:19.054508   32277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:57:19.061216   32277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:57:19.068114   32277 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:57:19.068155   32277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 17:57:19.074833   32277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 17:57:19.107947   32277 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 17:57:19.108017   32277 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:57:19.125993   32277 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 17:57:19.126104   32277 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-gcp
	I0819 17:57:19.126167   32277 kubeadm.go:310] OS: Linux
	I0819 17:57:19.126221   32277 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 17:57:19.126277   32277 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 17:57:19.126358   32277 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 17:57:19.126449   32277 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 17:57:19.126537   32277 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 17:57:19.126631   32277 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 17:57:19.126705   32277 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 17:57:19.126788   32277 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 17:57:19.126864   32277 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 17:57:19.176788   32277 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:57:19.176929   32277 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:57:19.177047   32277 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 17:57:19.182591   32277 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:57:19.184833   32277 out.go:235]   - Generating certificates and keys ...
	I0819 17:57:19.184936   32277 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:57:19.185018   32277 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:57:19.316059   32277 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 17:57:19.497248   32277 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 17:57:19.832276   32277 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 17:57:19.998862   32277 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 17:57:20.232100   32277 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 17:57:20.232258   32277 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-142951 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 17:57:20.315566   32277 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 17:57:20.315742   32277 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-142951 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 17:57:20.449530   32277 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 17:57:20.685085   32277 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 17:57:20.903096   32277 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 17:57:20.903191   32277 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:57:21.046144   32277 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:57:21.210289   32277 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 17:57:21.530383   32277 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:57:21.681941   32277 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:57:21.830596   32277 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:57:21.830943   32277 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:57:21.833285   32277 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:57:21.835335   32277 out.go:235]   - Booting up control plane ...
	I0819 17:57:21.835429   32277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:57:21.835546   32277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:57:21.835640   32277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:57:21.843375   32277 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:57:21.848501   32277 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:57:21.848557   32277 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:57:21.922048   32277 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 17:57:21.922154   32277 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 17:57:22.423256   32277 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.288249ms
	I0819 17:57:22.423380   32277 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 17:57:26.924062   32277 kubeadm.go:310] [api-check] The API server is healthy after 4.500805691s
	I0819 17:57:26.933891   32277 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 17:57:26.941687   32277 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 17:57:26.957299   32277 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 17:57:26.957539   32277 kubeadm.go:310] [mark-control-plane] Marking the node addons-142951 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 17:57:26.964077   32277 kubeadm.go:310] [bootstrap-token] Using token: azxnvb.3v27aiuj1vv955cj
	I0819 17:57:26.966089   32277 out.go:235]   - Configuring RBAC rules ...
	I0819 17:57:26.966210   32277 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 17:57:26.968296   32277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 17:57:26.973244   32277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 17:57:26.975390   32277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 17:57:26.977423   32277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 17:57:26.980458   32277 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 17:57:27.329845   32277 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 17:57:27.742834   32277 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 17:57:28.329651   32277 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 17:57:28.330416   32277 kubeadm.go:310] 
	I0819 17:57:28.330505   32277 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 17:57:28.330515   32277 kubeadm.go:310] 
	I0819 17:57:28.330619   32277 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 17:57:28.330629   32277 kubeadm.go:310] 
	I0819 17:57:28.330681   32277 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 17:57:28.330776   32277 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 17:57:28.330856   32277 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 17:57:28.330866   32277 kubeadm.go:310] 
	I0819 17:57:28.330952   32277 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 17:57:28.330967   32277 kubeadm.go:310] 
	I0819 17:57:28.331040   32277 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 17:57:28.331051   32277 kubeadm.go:310] 
	I0819 17:57:28.331128   32277 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 17:57:28.331237   32277 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 17:57:28.331326   32277 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 17:57:28.331336   32277 kubeadm.go:310] 
	I0819 17:57:28.331458   32277 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 17:57:28.331561   32277 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 17:57:28.331573   32277 kubeadm.go:310] 
	I0819 17:57:28.331684   32277 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token azxnvb.3v27aiuj1vv955cj \
	I0819 17:57:28.331837   32277 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59c63718fbc86a78511e804b1caaa3c322b35e7a3de8f3eb39f0bfe29aa00431 \
	I0819 17:57:28.331883   32277 kubeadm.go:310] 	--control-plane 
	I0819 17:57:28.331893   32277 kubeadm.go:310] 
	I0819 17:57:28.332006   32277 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 17:57:28.332016   32277 kubeadm.go:310] 
	I0819 17:57:28.332133   32277 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token azxnvb.3v27aiuj1vv955cj \
	I0819 17:57:28.332281   32277 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:59c63718fbc86a78511e804b1caaa3c322b35e7a3de8f3eb39f0bfe29aa00431 
	I0819 17:57:28.334006   32277 kubeadm.go:310] W0819 17:57:19.105666    1292 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:57:28.334257   32277 kubeadm.go:310] W0819 17:57:19.106240    1292 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:57:28.334443   32277 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-gcp\n", err: exit status 1
	I0819 17:57:28.334566   32277 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0819 17:57:28.334580   32277 cni.go:84] Creating CNI manager for ""
	I0819 17:57:28.334586   32277 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:57:28.336199   32277 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 17:57:28.337342   32277 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 17:57:28.340505   32277 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 17:57:28.340530   32277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 17:57:28.355907   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 17:57:28.535858   32277 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 17:57:28.536017   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:28.536068   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-142951 minikube.k8s.io/updated_at=2024_08_19T17_57_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411 minikube.k8s.io/name=addons-142951 minikube.k8s.io/primary=true
	I0819 17:57:28.542731   32277 ops.go:34] apiserver oom_adj: -16
	I0819 17:57:28.612555   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:29.113101   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:29.612824   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:30.113166   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:30.612923   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:31.113455   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:31.613599   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:32.113221   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:32.612910   32277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:57:32.670503   32277 kubeadm.go:1113] duration metric: took 4.134533777s to wait for elevateKubeSystemPrivileges
	I0819 17:57:32.670540   32277 kubeadm.go:394] duration metric: took 13.701926641s to StartCluster
	I0819 17:57:32.670563   32277 settings.go:142] acquiring lock: {Name:mkd30ec37009c3562b283392e8fb1c4131be31b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:32.670664   32277 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 17:57:32.670984   32277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/kubeconfig: {Name:mk3fc9bc92b0be5459854fbe59603f93f92756ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:57:32.671151   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 17:57:32.671160   32277 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 17:57:32.671257   32277 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 17:57:32.671332   32277 config.go:182] Loaded profile config "addons-142951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:57:32.671366   32277 addons.go:69] Setting cloud-spanner=true in profile "addons-142951"
	I0819 17:57:32.671368   32277 addons.go:69] Setting default-storageclass=true in profile "addons-142951"
	I0819 17:57:32.671379   32277 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-142951"
	I0819 17:57:32.671384   32277 addons.go:69] Setting metrics-server=true in profile "addons-142951"
	I0819 17:57:32.671398   32277 addons.go:234] Setting addon cloud-spanner=true in "addons-142951"
	I0819 17:57:32.671401   32277 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-142951"
	I0819 17:57:32.671400   32277 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-142951"
	I0819 17:57:32.671399   32277 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-142951"
	I0819 17:57:32.671423   32277 addons.go:234] Setting addon metrics-server=true in "addons-142951"
	I0819 17:57:32.671410   32277 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-142951"
	I0819 17:57:32.671437   32277 addons.go:69] Setting volumesnapshots=true in profile "addons-142951"
	I0819 17:57:32.671437   32277 addons.go:69] Setting volcano=true in profile "addons-142951"
	I0819 17:57:32.671451   32277 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-142951"
	I0819 17:57:32.671454   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671469   32277 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-142951"
	I0819 17:57:32.671474   32277 addons.go:234] Setting addon volcano=true in "addons-142951"
	I0819 17:57:32.671491   32277 addons.go:69] Setting registry=true in profile "addons-142951"
	I0819 17:57:32.671493   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671500   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671511   32277 addons.go:234] Setting addon registry=true in "addons-142951"
	I0819 17:57:32.671544   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671562   32277 addons.go:69] Setting storage-provisioner=true in profile "addons-142951"
	I0819 17:57:32.671582   32277 addons.go:234] Setting addon storage-provisioner=true in "addons-142951"
	I0819 17:57:32.671591   32277 addons.go:69] Setting ingress=true in profile "addons-142951"
	I0819 17:57:32.671602   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671621   32277 addons.go:234] Setting addon ingress=true in "addons-142951"
	I0819 17:57:32.671675   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671785   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671791   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671950   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671454   32277 addons.go:234] Setting addon volumesnapshots=true in "addons-142951"
	I0819 17:57:32.672001   32277 addons.go:69] Setting ingress-dns=true in profile "addons-142951"
	I0819 17:57:32.672023   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672036   32277 addons.go:234] Setting addon ingress-dns=true in "addons-142951"
	I0819 17:57:32.672060   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671425   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.672088   32277 addons.go:69] Setting gcp-auth=true in profile "addons-142951"
	I0819 17:57:32.672112   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672128   32277 addons.go:69] Setting helm-tiller=true in profile "addons-142951"
	I0819 17:57:32.672154   32277 addons.go:234] Setting addon helm-tiller=true in "addons-142951"
	I0819 17:57:32.672179   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.671366   32277 addons.go:69] Setting yakd=true in profile "addons-142951"
	I0819 17:57:32.672410   32277 addons.go:234] Setting addon yakd=true in "addons-142951"
	I0819 17:57:32.672438   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.672536   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672655   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672698   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671429   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.672840   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.673224   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672071   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.677328   32277 out.go:177] * Verifying Kubernetes components...
	I0819 17:57:32.677531   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671981   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.671960   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672078   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.672117   32277 mustload.go:65] Loading cluster: addons-142951
	I0819 17:57:32.671986   32277 addons.go:69] Setting inspektor-gadget=true in profile "addons-142951"
	I0819 17:57:32.678684   32277 addons.go:234] Setting addon inspektor-gadget=true in "addons-142951"
	I0819 17:57:32.678722   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.679217   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.679330   32277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:57:32.701627   32277 config.go:182] Loaded profile config "addons-142951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 17:57:32.701945   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.719438   32277 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:57:32.719438   32277 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 17:57:32.720976   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 17:57:32.721040   32277 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:57:32.721047   32277 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:57:32.721963   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 17:57:32.721984   32277 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 17:57:32.722059   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.723062   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 17:57:32.723512   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 17:57:32.723585   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.726105   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 17:57:32.726107   32277 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 17:57:32.726207   32277 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 17:57:32.727327   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 17:57:32.727342   32277 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 17:57:32.727358   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 17:57:32.727415   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.727568   32277 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 17:57:32.728722   32277 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 17:57:32.729086   32277 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:57:32.729100   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 17:57:32.729165   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.730237   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 17:57:32.730376   32277 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 17:57:32.730388   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 17:57:32.730430   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.732185   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 17:57:32.733293   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 17:57:32.734323   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 17:57:32.734931   32277 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-142951"
	I0819 17:57:32.734976   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.735359   32277 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 17:57:32.735409   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.738974   32277 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 17:57:32.739081   32277 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 17:57:32.739101   32277 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 17:57:32.739160   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.740153   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 17:57:32.740171   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 17:57:32.740230   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.750583   32277 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 17:57:32.750696   32277 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:57:32.751803   32277 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:57:32.751828   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 17:57:32.751877   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.752185   32277 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:57:32.752204   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 17:57:32.752250   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.760898   32277 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0819 17:57:32.764077   32277 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0819 17:57:32.764102   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0819 17:57:32.764162   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.765303   32277 addons.go:234] Setting addon default-storageclass=true in "addons-142951"
	I0819 17:57:32.765346   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.765810   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:32.773148   32277 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 17:57:32.774810   32277 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 17:57:32.774833   32277 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 17:57:32.774897   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.785271   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.800919   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.801465   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.811101   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.811773   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.816295   32277 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 17:57:32.817438   32277 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 17:57:32.817456   32277 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 17:57:32.817526   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.819429   32277 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 17:57:32.819593   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:32.820652   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.823981   32277 out.go:177]   - Using image docker.io/busybox:stable
	I0819 17:57:32.825381   32277 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:57:32.825400   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 17:57:32.825456   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.827837   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.829928   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.831013   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	W0819 17:57:32.832664   32277 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0819 17:57:32.835809   32277 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 17:57:32.835822   32277 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 17:57:32.835861   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:32.836405   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.837992   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 17:57:32.847301   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.847301   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.849684   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.855066   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:32.957616   32277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:57:33.165902   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:57:33.175062   32277 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 17:57:33.175084   32277 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 17:57:33.258509   32277 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 17:57:33.258550   32277 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 17:57:33.259813   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 17:57:33.259841   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 17:57:33.269815   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:57:33.270309   32277 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 17:57:33.270324   32277 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 17:57:33.273225   32277 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 17:57:33.273245   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 17:57:33.277593   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 17:57:33.277968   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 17:57:33.366699   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:57:33.378073   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:57:33.459712   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 17:57:33.459785   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 17:57:33.464489   32277 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 17:57:33.464513   32277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 17:57:33.464827   32277 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0819 17:57:33.464843   32277 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0819 17:57:33.465036   32277 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:57:33.465047   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 17:57:33.465897   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:57:33.477562   32277 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 17:57:33.477627   32277 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 17:57:33.479134   32277 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 17:57:33.479196   32277 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 17:57:33.573354   32277 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 17:57:33.573433   32277 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 17:57:33.770675   32277 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 17:57:33.770745   32277 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0819 17:57:33.772243   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:57:33.773452   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 17:57:33.773472   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 17:57:33.864649   32277 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:57:33.864722   32277 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 17:57:33.871377   32277 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 17:57:33.871440   32277 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 17:57:33.958908   32277 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 17:57:33.958986   32277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 17:57:33.962768   32277 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 17:57:33.962791   32277 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 17:57:34.074729   32277 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:57:34.074811   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 17:57:34.160017   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0819 17:57:34.179628   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:57:34.258604   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:57:34.259725   32277 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 17:57:34.259750   32277 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 17:57:34.265858   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 17:57:34.265885   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 17:57:34.274508   32277 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.436483792s)
	I0819 17:57:34.274542   32277 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 17:57:34.275718   32277 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.318075078s)
	I0819 17:57:34.276596   32277 node_ready.go:35] waiting up to 6m0s for node "addons-142951" to be "Ready" ...
	I0819 17:57:34.358428   32277 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 17:57:34.358459   32277 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 17:57:34.657581   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 17:57:34.657680   32277 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 17:57:34.674408   32277 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 17:57:34.674483   32277 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 17:57:35.072906   32277 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 17:57:35.072975   32277 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 17:57:35.164428   32277 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:57:35.164505   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 17:57:35.167545   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 17:57:35.167621   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 17:57:35.380870   32277 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 17:57:35.380952   32277 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 17:57:35.476478   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 17:57:35.476563   32277 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 17:57:35.659290   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:57:35.670663   32277 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-142951" context rescaled to 1 replicas
	I0819 17:57:35.771284   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 17:57:35.771367   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 17:57:35.775428   32277 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:57:35.775492   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 17:57:35.978208   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:57:36.174228   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 17:57:36.174311   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 17:57:36.381679   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:36.478604   32277 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:57:36.478678   32277 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 17:57:36.771239   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:57:38.782980   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:39.275787   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.109845881s)
	I0819 17:57:39.275825   32277 addons.go:475] Verifying addon ingress=true in "addons-142951"
	I0819 17:57:39.275999   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.006102577s)
	I0819 17:57:39.276070   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.99808321s)
	I0819 17:57:39.276042   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.998422643s)
	I0819 17:57:39.276142   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.909366582s)
	I0819 17:57:39.276188   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.898025973s)
	I0819 17:57:39.276267   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.810351191s)
	I0819 17:57:39.276307   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.504010124s)
	I0819 17:57:39.276318   32277 addons.go:475] Verifying addon registry=true in "addons-142951"
	I0819 17:57:39.276348   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.116238002s)
	I0819 17:57:39.276407   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.096750562s)
	I0819 17:57:39.276555   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.017905424s)
	I0819 17:57:39.276579   32277 addons.go:475] Verifying addon metrics-server=true in "addons-142951"
	I0819 17:57:39.277262   32277 out.go:177] * Verifying ingress addon...
	I0819 17:57:39.278168   32277 out.go:177] * Verifying registry addon...
	I0819 17:57:39.278199   32277 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-142951 service yakd-dashboard -n yakd-dashboard
	
	I0819 17:57:39.279728   32277 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 17:57:39.281119   32277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 17:57:39.287753   32277 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:57:39.287813   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:39.288003   32277 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 17:57:39.288023   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0819 17:57:39.360347   32277 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0819 17:57:39.783373   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:39.783807   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:40.061680   32277 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 17:57:40.061768   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:40.076423   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.098124831s)
	I0819 17:57:40.076572   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.417183138s)
	W0819 17:57:40.076619   32277 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:57:40.076652   32277 retry.go:31] will retry after 214.60233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:57:40.082675   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:40.263534   32277 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 17:57:40.283312   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:40.283406   32277 addons.go:234] Setting addon gcp-auth=true in "addons-142951"
	I0819 17:57:40.283469   32277 host.go:66] Checking if "addons-142951" exists ...
	I0819 17:57:40.283792   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:40.283853   32277 cli_runner.go:164] Run: docker container inspect addons-142951 --format={{.State.Status}}
	I0819 17:57:40.291432   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:57:40.304535   32277 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 17:57:40.304590   32277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-142951
	I0819 17:57:40.323802   32277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/addons-142951/id_rsa Username:docker}
	I0819 17:57:40.598125   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.826789175s)
	I0819 17:57:40.598169   32277 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-142951"
	I0819 17:57:40.599653   32277 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 17:57:40.601695   32277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 17:57:40.603926   32277 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:57:40.603940   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:40.782917   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:40.783514   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:41.161604   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:41.280921   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:41.283698   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:41.283795   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:41.605411   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:41.782618   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:41.783651   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:42.106231   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:42.283153   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:42.283351   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:42.605385   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:42.783700   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:42.784083   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:43.104301   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:43.260614   32277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.969137296s)
	I0819 17:57:43.260643   32277 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.956079566s)
	I0819 17:57:43.262461   32277 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:57:43.263628   32277 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 17:57:43.265034   32277 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 17:57:43.265059   32277 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 17:57:43.282844   32277 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 17:57:43.282864   32277 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 17:57:43.282920   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:43.283357   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:43.299098   32277 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:57:43.299121   32277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 17:57:43.315237   32277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:57:43.606663   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:43.778948   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:43.783539   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:43.784429   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:43.875331   32277 addons.go:475] Verifying addon gcp-auth=true in "addons-142951"
	I0819 17:57:43.876684   32277 out.go:177] * Verifying gcp-auth addon...
	I0819 17:57:43.878951   32277 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 17:57:43.885116   32277 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 17:57:43.885156   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:44.105015   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:44.282976   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:44.284026   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:44.382311   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:44.604697   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:44.783314   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:44.783355   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:44.883146   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:45.105203   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:45.282721   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:45.283568   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:45.381671   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:45.605392   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:45.779643   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:45.783205   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:45.783426   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:45.881749   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:46.105672   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:46.282974   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:46.283794   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:46.382031   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:46.604427   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:46.783034   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:46.783857   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:46.882571   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:47.105541   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:47.283021   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:47.283243   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:47.381597   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:47.605172   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:47.779687   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:47.782933   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:47.783260   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:47.881682   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:48.105293   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:48.282768   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:48.283031   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:48.382199   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:48.605106   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:48.782765   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:48.783891   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:48.882154   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:49.104784   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:49.282339   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:49.283377   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:49.381702   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:49.605179   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:49.782691   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:49.783661   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:49.881779   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:50.105489   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:50.279778   32277 node_ready.go:53] node "addons-142951" has status "Ready":"False"
	I0819 17:57:50.282303   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:50.283182   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:50.381346   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:50.604604   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:50.783070   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:50.783462   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:50.881654   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:51.105410   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:51.279451   32277 node_ready.go:49] node "addons-142951" has status "Ready":"True"
	I0819 17:57:51.279479   32277 node_ready.go:38] duration metric: took 17.002843434s for node "addons-142951" to be "Ready" ...
	I0819 17:57:51.279490   32277 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:57:51.284892   32277 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:57:51.284914   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:51.285706   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:51.287598   32277 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fc8vt" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:51.381615   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:51.605713   32277 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:57:51.605733   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:51.786707   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:51.786835   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:51.883638   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:52.106288   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:52.283599   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:52.283768   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:52.383436   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:52.606392   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:52.784211   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:52.784587   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:52.881975   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:53.106284   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:53.284061   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:53.284082   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:53.291576   32277 pod_ready.go:93] pod "coredns-6f6b679f8f-fc8vt" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.291595   32277 pod_ready.go:82] duration metric: took 2.003963947s for pod "coredns-6f6b679f8f-fc8vt" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.291618   32277 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.295360   32277 pod_ready.go:93] pod "etcd-addons-142951" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.295377   32277 pod_ready.go:82] duration metric: took 3.753458ms for pod "etcd-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.295388   32277 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.299720   32277 pod_ready.go:93] pod "kube-apiserver-addons-142951" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.299736   32277 pod_ready.go:82] duration metric: took 4.342068ms for pod "kube-apiserver-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.299747   32277 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.303277   32277 pod_ready.go:93] pod "kube-controller-manager-addons-142951" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.303292   32277 pod_ready.go:82] duration metric: took 3.538469ms for pod "kube-controller-manager-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.303301   32277 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q94sk" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.306438   32277 pod_ready.go:93] pod "kube-proxy-q94sk" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.306456   32277 pod_ready.go:82] duration metric: took 3.147987ms for pod "kube-proxy-q94sk" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.306465   32277 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.381797   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:53.605681   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:53.690272   32277 pod_ready.go:93] pod "kube-scheduler-addons-142951" in "kube-system" namespace has status "Ready":"True"
	I0819 17:57:53.690295   32277 pod_ready.go:82] duration metric: took 383.821034ms for pod "kube-scheduler-addons-142951" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.690307   32277 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace to be "Ready" ...
	I0819 17:57:53.783926   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:53.783925   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:53.881993   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:54.160304   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:54.284087   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:54.284413   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:54.382229   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:54.606437   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:54.783431   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:54.784576   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:54.882066   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:55.106444   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:55.283729   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:55.285215   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:55.381956   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:55.605773   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:55.695562   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:57:55.783858   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:55.783909   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:55.882955   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:56.105586   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:56.283698   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:56.283697   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:56.382127   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:56.605857   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:56.783133   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:56.784098   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:56.882613   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:57.106986   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:57.283744   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:57.283809   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:57.382122   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:57.661370   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:57.761948   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:57:57.783840   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:57.784594   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:57.882708   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:58.162177   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:58.283823   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:58.284193   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:58.382764   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:58.606798   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:58.784472   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:58.784623   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:58.882128   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:59.105646   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:59.285080   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:59.285438   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:59.382750   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:57:59.607013   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:57:59.783553   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:57:59.783738   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:57:59.882468   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:00.105737   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:00.195852   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:00.285239   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:00.285242   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:00.385036   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:00.605452   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:00.783819   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:00.785292   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:00.882860   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:01.106799   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:01.284180   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:01.284275   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:01.382922   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:01.606398   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:01.784121   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:01.784700   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:01.883064   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:02.106536   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:02.284514   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:02.284610   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:02.382212   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:02.605266   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:02.695460   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:02.784222   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:02.784238   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:02.882411   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:03.106926   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:03.284028   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:03.284202   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:03.381890   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:03.605389   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:03.783985   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:03.784088   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:03.881786   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:04.106653   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:04.283660   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:04.284725   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:04.382326   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:04.605675   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:04.783864   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:04.784149   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:04.882388   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:05.163469   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:05.259246   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:05.283891   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:05.284489   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:05.381716   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:05.606736   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:05.784014   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:05.784036   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:05.882374   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:06.106522   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:06.283533   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:06.284393   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:06.383061   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:06.605832   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:06.784203   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:06.785737   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:06.881915   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:07.106538   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:07.283917   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:07.283918   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:07.383668   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:07.606361   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:07.695600   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:07.784072   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:07.784398   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:07.882479   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:08.105732   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:08.284066   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:08.284555   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:08.381725   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:08.605931   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:08.784101   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:08.784128   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:08.882183   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:09.105424   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:09.284435   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:09.284555   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:09.382179   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:09.606501   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:09.696118   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:09.783530   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:09.784242   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:09.883153   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:10.105734   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:10.284130   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:10.284403   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:10.382390   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:10.605990   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:10.784445   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:10.784870   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:10.882394   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:11.106389   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:11.283900   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:11.283950   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:11.381895   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:11.606066   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:11.783685   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:11.783946   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:11.881788   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:12.106254   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:12.195338   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:12.283693   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:12.283869   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:12.381878   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:12.605434   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:12.783118   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:12.783989   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:12.882526   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:13.105805   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:13.283589   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:13.283744   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:13.381816   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:13.605142   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:13.783801   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:13.784054   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:13.882412   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:14.105803   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:14.195744   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:14.284671   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:14.285749   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:14.382752   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:14.669466   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:14.866697   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:14.867989   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:14.964367   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:15.162584   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:15.372426   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:15.372772   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:15.458984   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:15.661342   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:15.784378   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:15.785113   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:15.882202   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:16.106230   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:16.284546   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:16.285002   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:16.382169   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:16.606167   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:16.695578   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:16.784446   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:16.784491   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:16.883532   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:17.106570   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:17.284007   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:17.285190   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:17.382586   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:17.606311   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:17.783847   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:17.784589   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:17.882406   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:18.106210   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:18.284511   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:18.284783   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:18.382200   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:18.606482   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:18.695824   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:18.783650   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:18.784719   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:18.882070   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:19.106342   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:19.284532   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:19.285003   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:19.383865   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:19.606452   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:19.783466   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:19.784445   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:19.882228   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:20.105407   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:20.284332   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:20.284588   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:20.383631   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:20.606204   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:20.695944   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:20.784199   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:20.784260   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:20.882985   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:21.105484   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:21.283267   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:21.286356   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:21.385631   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:21.606267   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:21.783975   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:21.784101   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:21.882388   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:22.106276   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:22.285469   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:22.285801   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:22.382694   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:22.606850   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:22.758972   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:22.784306   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:22.784360   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:22.882381   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:23.105508   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:23.283970   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:23.284128   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:23.383406   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:23.606089   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:23.782938   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:23.784586   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:23.881729   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:24.106940   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:24.284448   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:24.284798   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:24.383610   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:24.606471   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:24.784127   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:24.784184   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:24.882340   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:25.105574   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:25.196864   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:25.284874   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:25.285496   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:25.382296   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:25.606454   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:25.783873   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:25.784000   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:25.882222   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:26.105897   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:26.284181   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:26.285636   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:26.382337   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:26.605872   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:26.783672   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:26.783742   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:26.881951   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:27.106280   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:27.283935   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:27.285007   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:27.382568   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:27.606382   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:27.695131   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:27.783674   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:27.784021   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:27.882640   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:28.106891   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:28.283392   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:28.283741   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:28.382396   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:28.606796   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:28.784242   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:58:28.784380   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:28.882450   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:29.109480   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:29.284301   32277 kapi.go:107] duration metric: took 50.003181765s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 17:58:29.284588   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:29.382552   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:29.630400   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:29.695696   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:29.784186   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:29.882146   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:30.105776   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:30.283070   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:30.382943   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:30.605981   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:30.866609   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:30.883148   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:58:31.162115   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:31.284478   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:31.384117   32277 kapi.go:107] duration metric: took 47.505166867s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 17:58:31.385403   32277 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-142951 cluster.
	I0819 17:58:31.386562   32277 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 17:58:31.387674   32277 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 17:58:31.661911   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:31.764465   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:31.784429   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:32.161583   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:32.283906   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:32.661988   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:32.783565   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:33.106746   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:33.283351   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:33.605926   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:33.783752   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:34.106282   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:34.195709   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:34.284424   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:34.606799   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:34.783863   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:35.105912   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:35.284048   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:35.605731   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:35.783708   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:36.107688   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:36.197021   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:36.284162   32277 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:58:36.661425   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:36.783864   32277 kapi.go:107] duration metric: took 57.504134819s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 17:58:37.106192   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:37.662531   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:38.106463   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:38.605899   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:38.695981   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:39.105528   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:39.606629   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:40.105235   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:40.605724   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:41.106811   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:41.196632   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:41.606014   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:42.106421   32277 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:58:42.605546   32277 kapi.go:107] duration metric: took 1m2.003851904s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 17:58:42.607331   32277 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, helm-tiller, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0819 17:58:42.608458   32277 addons.go:510] duration metric: took 1m9.937212955s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner helm-tiller metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0819 17:58:43.695594   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:46.194810   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:48.195487   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:50.195627   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:52.195921   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:54.695446   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:56.695594   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:58:58.696153   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:01.194762   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:03.195871   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:05.695908   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:08.195682   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:10.695200   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:12.695785   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:14.785744   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:17.195000   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:19.195992   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:21.695558   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:23.695822   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:26.195163   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:28.695640   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:31.194980   32277 pod_ready.go:103] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"False"
	I0819 17:59:31.695647   32277 pod_ready.go:93] pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace has status "Ready":"True"
	I0819 17:59:31.695667   32277 pod_ready.go:82] duration metric: took 1m38.005353358s for pod "metrics-server-8988944d9-hggkq" in "kube-system" namespace to be "Ready" ...
	I0819 17:59:31.695677   32277 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bc72h" in "kube-system" namespace to be "Ready" ...
	I0819 17:59:31.699319   32277 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-bc72h" in "kube-system" namespace has status "Ready":"True"
	I0819 17:59:31.699335   32277 pod_ready.go:82] duration metric: took 3.65301ms for pod "nvidia-device-plugin-daemonset-bc72h" in "kube-system" namespace to be "Ready" ...
	I0819 17:59:31.699352   32277 pod_ready.go:39] duration metric: took 1m40.419848821s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:59:31.699367   32277 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:59:31.699393   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:59:31.699433   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:59:31.731403   32277 cri.go:89] found id: "5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:31.731426   32277 cri.go:89] found id: ""
	I0819 17:59:31.731434   32277 logs.go:276] 1 containers: [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350]
	I0819 17:59:31.731478   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.734590   32277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:59:31.734649   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:59:31.765868   32277 cri.go:89] found id: "bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:31.765890   32277 cri.go:89] found id: ""
	I0819 17:59:31.765897   32277 logs.go:276] 1 containers: [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f]
	I0819 17:59:31.765941   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.768992   32277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:59:31.769040   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:59:31.799498   32277 cri.go:89] found id: "bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:31.799519   32277 cri.go:89] found id: ""
	I0819 17:59:31.799526   32277 logs.go:276] 1 containers: [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8]
	I0819 17:59:31.799572   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.802512   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:59:31.802558   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:59:31.833492   32277 cri.go:89] found id: "a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:31.833509   32277 cri.go:89] found id: ""
	I0819 17:59:31.833518   32277 logs.go:276] 1 containers: [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c]
	I0819 17:59:31.833566   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.836521   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:59:31.836569   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:59:31.867200   32277 cri.go:89] found id: "da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:31.867229   32277 cri.go:89] found id: ""
	I0819 17:59:31.867239   32277 logs.go:276] 1 containers: [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97]
	I0819 17:59:31.867288   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.870451   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:59:31.870501   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:59:31.901646   32277 cri.go:89] found id: "7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:31.901665   32277 cri.go:89] found id: ""
	I0819 17:59:31.901673   32277 logs.go:276] 1 containers: [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664]
	I0819 17:59:31.901713   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.904652   32277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:59:31.904707   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:59:31.935321   32277 cri.go:89] found id: "f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:31.935339   32277 cri.go:89] found id: ""
	I0819 17:59:31.935349   32277 logs.go:276] 1 containers: [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf]
	I0819 17:59:31.935398   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:31.938352   32277 logs.go:123] Gathering logs for kube-controller-manager [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664] ...
	I0819 17:59:31.938371   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:31.993944   32277 logs.go:123] Gathering logs for container status ...
	I0819 17:59:31.993974   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:59:32.032005   32277 logs.go:123] Gathering logs for kubelet ...
	I0819 17:59:32.032033   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:59:32.098309   32277 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:59:32.098347   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:59:32.191911   32277 logs.go:123] Gathering logs for kube-apiserver [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350] ...
	I0819 17:59:32.191936   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:32.234689   32277 logs.go:123] Gathering logs for kube-scheduler [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c] ...
	I0819 17:59:32.234713   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:32.270838   32277 logs.go:123] Gathering logs for kube-proxy [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97] ...
	I0819 17:59:32.270867   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:32.301703   32277 logs.go:123] Gathering logs for kindnet [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf] ...
	I0819 17:59:32.301728   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:32.339842   32277 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:59:32.339868   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:59:32.411461   32277 logs.go:123] Gathering logs for dmesg ...
	I0819 17:59:32.411496   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:59:32.423149   32277 logs.go:123] Gathering logs for etcd [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f] ...
	I0819 17:59:32.423171   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:32.478030   32277 logs.go:123] Gathering logs for coredns [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8] ...
	I0819 17:59:32.478060   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:35.010801   32277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:59:35.024410   32277 api_server.go:72] duration metric: took 2m2.353215137s to wait for apiserver process to appear ...
	I0819 17:59:35.024435   32277 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:59:35.024464   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:59:35.024517   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:59:35.056105   32277 cri.go:89] found id: "5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:35.056126   32277 cri.go:89] found id: ""
	I0819 17:59:35.056134   32277 logs.go:276] 1 containers: [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350]
	I0819 17:59:35.056173   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.059369   32277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:59:35.059434   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:59:35.090794   32277 cri.go:89] found id: "bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:35.090817   32277 cri.go:89] found id: ""
	I0819 17:59:35.090826   32277 logs.go:276] 1 containers: [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f]
	I0819 17:59:35.090873   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.093994   32277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:59:35.094057   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:59:35.125952   32277 cri.go:89] found id: "bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:35.125972   32277 cri.go:89] found id: ""
	I0819 17:59:35.125979   32277 logs.go:276] 1 containers: [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8]
	I0819 17:59:35.126018   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.129227   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:59:35.129337   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:59:35.161349   32277 cri.go:89] found id: "a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:35.161386   32277 cri.go:89] found id: ""
	I0819 17:59:35.161395   32277 logs.go:276] 1 containers: [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c]
	I0819 17:59:35.161445   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.164682   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:59:35.164745   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:59:35.197815   32277 cri.go:89] found id: "da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:35.197837   32277 cri.go:89] found id: ""
	I0819 17:59:35.197845   32277 logs.go:276] 1 containers: [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97]
	I0819 17:59:35.197889   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.200963   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:59:35.201013   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:59:35.233247   32277 cri.go:89] found id: "7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:35.233265   32277 cri.go:89] found id: ""
	I0819 17:59:35.233272   32277 logs.go:276] 1 containers: [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664]
	I0819 17:59:35.233312   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.236597   32277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:59:35.236666   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:59:35.269189   32277 cri.go:89] found id: "f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:35.269210   32277 cri.go:89] found id: ""
	I0819 17:59:35.269217   32277 logs.go:276] 1 containers: [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf]
	I0819 17:59:35.269264   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:35.272443   32277 logs.go:123] Gathering logs for dmesg ...
	I0819 17:59:35.272462   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:59:35.283559   32277 logs.go:123] Gathering logs for etcd [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f] ...
	I0819 17:59:35.283584   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:35.341413   32277 logs.go:123] Gathering logs for kube-scheduler [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c] ...
	I0819 17:59:35.341442   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:35.379012   32277 logs.go:123] Gathering logs for kube-proxy [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97] ...
	I0819 17:59:35.379041   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:35.411198   32277 logs.go:123] Gathering logs for kube-controller-manager [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664] ...
	I0819 17:59:35.411224   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:35.466988   32277 logs.go:123] Gathering logs for kindnet [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf] ...
	I0819 17:59:35.467021   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:35.507149   32277 logs.go:123] Gathering logs for kubelet ...
	I0819 17:59:35.507185   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:59:35.572561   32277 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:59:35.572595   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:59:35.665635   32277 logs.go:123] Gathering logs for kube-apiserver [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350] ...
	I0819 17:59:35.665662   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:35.707679   32277 logs.go:123] Gathering logs for coredns [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8] ...
	I0819 17:59:35.707708   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:35.742132   32277 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:59:35.742158   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:59:35.815470   32277 logs.go:123] Gathering logs for container status ...
	I0819 17:59:35.815505   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:59:38.357695   32277 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 17:59:38.361111   32277 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 17:59:38.361947   32277 api_server.go:141] control plane version: v1.31.0
	I0819 17:59:38.361967   32277 api_server.go:131] duration metric: took 3.337527252s to wait for apiserver health ...
	I0819 17:59:38.361975   32277 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:59:38.361997   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 17:59:38.362043   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 17:59:38.393619   32277 cri.go:89] found id: "5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:38.393636   32277 cri.go:89] found id: ""
	I0819 17:59:38.393644   32277 logs.go:276] 1 containers: [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350]
	I0819 17:59:38.393689   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.396602   32277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 17:59:38.396652   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 17:59:38.426870   32277 cri.go:89] found id: "bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:38.426893   32277 cri.go:89] found id: ""
	I0819 17:59:38.426901   32277 logs.go:276] 1 containers: [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f]
	I0819 17:59:38.426943   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.430132   32277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 17:59:38.430189   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 17:59:38.461329   32277 cri.go:89] found id: "bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:38.461345   32277 cri.go:89] found id: ""
	I0819 17:59:38.461352   32277 logs.go:276] 1 containers: [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8]
	I0819 17:59:38.461389   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.464280   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 17:59:38.464326   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 17:59:38.495248   32277 cri.go:89] found id: "a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:38.495268   32277 cri.go:89] found id: ""
	I0819 17:59:38.495278   32277 logs.go:276] 1 containers: [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c]
	I0819 17:59:38.495318   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.498293   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 17:59:38.498348   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 17:59:38.529762   32277 cri.go:89] found id: "da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:38.529787   32277 cri.go:89] found id: ""
	I0819 17:59:38.529797   32277 logs.go:276] 1 containers: [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97]
	I0819 17:59:38.529840   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.532733   32277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 17:59:38.532778   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 17:59:38.563652   32277 cri.go:89] found id: "7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:38.563673   32277 cri.go:89] found id: ""
	I0819 17:59:38.563682   32277 logs.go:276] 1 containers: [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664]
	I0819 17:59:38.563734   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.566769   32277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 17:59:38.566817   32277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 17:59:38.598710   32277 cri.go:89] found id: "f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:38.598733   32277 cri.go:89] found id: ""
	I0819 17:59:38.598742   32277 logs.go:276] 1 containers: [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf]
	I0819 17:59:38.598792   32277 ssh_runner.go:195] Run: which crictl
	I0819 17:59:38.601802   32277 logs.go:123] Gathering logs for describe nodes ...
	I0819 17:59:38.601824   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 17:59:38.688508   32277 logs.go:123] Gathering logs for kube-apiserver [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350] ...
	I0819 17:59:38.688532   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350"
	I0819 17:59:38.731332   32277 logs.go:123] Gathering logs for kube-controller-manager [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664] ...
	I0819 17:59:38.731357   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664"
	I0819 17:59:38.784927   32277 logs.go:123] Gathering logs for kindnet [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf] ...
	I0819 17:59:38.784952   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf"
	I0819 17:59:38.821388   32277 logs.go:123] Gathering logs for CRI-O ...
	I0819 17:59:38.821412   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 17:59:38.892973   32277 logs.go:123] Gathering logs for container status ...
	I0819 17:59:38.892998   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 17:59:38.931248   32277 logs.go:123] Gathering logs for dmesg ...
	I0819 17:59:38.931272   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 17:59:38.942339   32277 logs.go:123] Gathering logs for etcd [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f] ...
	I0819 17:59:38.942359   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f"
	I0819 17:59:38.997979   32277 logs.go:123] Gathering logs for coredns [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8] ...
	I0819 17:59:38.998004   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8"
	I0819 17:59:39.031258   32277 logs.go:123] Gathering logs for kube-scheduler [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c] ...
	I0819 17:59:39.031284   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c"
	I0819 17:59:39.067004   32277 logs.go:123] Gathering logs for kube-proxy [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97] ...
	I0819 17:59:39.067030   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97"
	I0819 17:59:39.097168   32277 logs.go:123] Gathering logs for kubelet ...
	I0819 17:59:39.097196   32277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 17:59:41.669508   32277 system_pods.go:59] 19 kube-system pods found
	I0819 17:59:41.669534   32277 system_pods.go:61] "coredns-6f6b679f8f-fc8vt" [ebed1ffd-53d1-4366-bdc1-29fb14ddbefb] Running
	I0819 17:59:41.669539   32277 system_pods.go:61] "csi-hostpath-attacher-0" [fd596b06-341f-47f2-a8da-e7dc64f41141] Running
	I0819 17:59:41.669543   32277 system_pods.go:61] "csi-hostpath-resizer-0" [559fd1c1-bd37-424c-84e0-dc698b2aed5d] Running
	I0819 17:59:41.669547   32277 system_pods.go:61] "csi-hostpathplugin-dl2zv" [d5dbe9eb-40e4-493c-86e6-0b23dcd5368a] Running
	I0819 17:59:41.669551   32277 system_pods.go:61] "etcd-addons-142951" [d3b8f60a-0668-45e1-ab67-58b8f3bb4b6f] Running
	I0819 17:59:41.669555   32277 system_pods.go:61] "kindnet-v2xdp" [d80fcb4a-57b1-4a3f-a374-cf3eb49eaad9] Running
	I0819 17:59:41.669558   32277 system_pods.go:61] "kube-apiserver-addons-142951" [647bfa2f-59b3-40c4-9441-6c585868606c] Running
	I0819 17:59:41.669562   32277 system_pods.go:61] "kube-controller-manager-addons-142951" [fcf6b86a-dc2d-481c-b766-4acd5fabca72] Running
	I0819 17:59:41.669565   32277 system_pods.go:61] "kube-ingress-dns-minikube" [c80a5549-ce5e-4dcd-adca-60388c15eb01] Running
	I0819 17:59:41.669568   32277 system_pods.go:61] "kube-proxy-q94sk" [67c62ce2-b009-4e1e-b458-a932b2d8bda0] Running
	I0819 17:59:41.669572   32277 system_pods.go:61] "kube-scheduler-addons-142951" [62d2cbec-17d2-4363-a160-36caaa89544a] Running
	I0819 17:59:41.669575   32277 system_pods.go:61] "metrics-server-8988944d9-hggkq" [0dca4d1b-5042-4c63-b3e2-04f12c5f19a8] Running
	I0819 17:59:41.669578   32277 system_pods.go:61] "nvidia-device-plugin-daemonset-bc72h" [1afb0b8d-3754-410e-886b-723b6ec99725] Running
	I0819 17:59:41.669582   32277 system_pods.go:61] "registry-6fb4cdfc84-mflg4" [7a8a2fd6-50f4-4941-a77a-aa97fe6fde07] Running
	I0819 17:59:41.669587   32277 system_pods.go:61] "registry-proxy-cpszr" [4104108c-9aa8-4ddc-b4ab-13ffb2364b83] Running
	I0819 17:59:41.669590   32277 system_pods.go:61] "snapshot-controller-56fcc65765-bm9q4" [be809ad2-1210-4bd5-9d06-c1fc540796ef] Running
	I0819 17:59:41.669592   32277 system_pods.go:61] "snapshot-controller-56fcc65765-mrg2k" [9bb61f05-1178-4693-827c-8ec9467bb365] Running
	I0819 17:59:41.669596   32277 system_pods.go:61] "storage-provisioner" [22cafd60-bf3d-43f0-89cd-7cd1ed607e0a] Running
	I0819 17:59:41.669601   32277 system_pods.go:61] "tiller-deploy-b48cc5f79-gjp98" [c259324f-94be-46a4-9f28-bb1278b517b6] Running
	I0819 17:59:41.669608   32277 system_pods.go:74] duration metric: took 3.307626785s to wait for pod list to return data ...
	I0819 17:59:41.669616   32277 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:59:41.671548   32277 default_sa.go:45] found service account: "default"
	I0819 17:59:41.671570   32277 default_sa.go:55] duration metric: took 1.948088ms for default service account to be created ...
	I0819 17:59:41.671577   32277 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:59:41.678997   32277 system_pods.go:86] 19 kube-system pods found
	I0819 17:59:41.679017   32277 system_pods.go:89] "coredns-6f6b679f8f-fc8vt" [ebed1ffd-53d1-4366-bdc1-29fb14ddbefb] Running
	I0819 17:59:41.679024   32277 system_pods.go:89] "csi-hostpath-attacher-0" [fd596b06-341f-47f2-a8da-e7dc64f41141] Running
	I0819 17:59:41.679028   32277 system_pods.go:89] "csi-hostpath-resizer-0" [559fd1c1-bd37-424c-84e0-dc698b2aed5d] Running
	I0819 17:59:41.679032   32277 system_pods.go:89] "csi-hostpathplugin-dl2zv" [d5dbe9eb-40e4-493c-86e6-0b23dcd5368a] Running
	I0819 17:59:41.679035   32277 system_pods.go:89] "etcd-addons-142951" [d3b8f60a-0668-45e1-ab67-58b8f3bb4b6f] Running
	I0819 17:59:41.679038   32277 system_pods.go:89] "kindnet-v2xdp" [d80fcb4a-57b1-4a3f-a374-cf3eb49eaad9] Running
	I0819 17:59:41.679042   32277 system_pods.go:89] "kube-apiserver-addons-142951" [647bfa2f-59b3-40c4-9441-6c585868606c] Running
	I0819 17:59:41.679045   32277 system_pods.go:89] "kube-controller-manager-addons-142951" [fcf6b86a-dc2d-481c-b766-4acd5fabca72] Running
	I0819 17:59:41.679049   32277 system_pods.go:89] "kube-ingress-dns-minikube" [c80a5549-ce5e-4dcd-adca-60388c15eb01] Running
	I0819 17:59:41.679055   32277 system_pods.go:89] "kube-proxy-q94sk" [67c62ce2-b009-4e1e-b458-a932b2d8bda0] Running
	I0819 17:59:41.679058   32277 system_pods.go:89] "kube-scheduler-addons-142951" [62d2cbec-17d2-4363-a160-36caaa89544a] Running
	I0819 17:59:41.679062   32277 system_pods.go:89] "metrics-server-8988944d9-hggkq" [0dca4d1b-5042-4c63-b3e2-04f12c5f19a8] Running
	I0819 17:59:41.679066   32277 system_pods.go:89] "nvidia-device-plugin-daemonset-bc72h" [1afb0b8d-3754-410e-886b-723b6ec99725] Running
	I0819 17:59:41.679069   32277 system_pods.go:89] "registry-6fb4cdfc84-mflg4" [7a8a2fd6-50f4-4941-a77a-aa97fe6fde07] Running
	I0819 17:59:41.679072   32277 system_pods.go:89] "registry-proxy-cpszr" [4104108c-9aa8-4ddc-b4ab-13ffb2364b83] Running
	I0819 17:59:41.679075   32277 system_pods.go:89] "snapshot-controller-56fcc65765-bm9q4" [be809ad2-1210-4bd5-9d06-c1fc540796ef] Running
	I0819 17:59:41.679079   32277 system_pods.go:89] "snapshot-controller-56fcc65765-mrg2k" [9bb61f05-1178-4693-827c-8ec9467bb365] Running
	I0819 17:59:41.679083   32277 system_pods.go:89] "storage-provisioner" [22cafd60-bf3d-43f0-89cd-7cd1ed607e0a] Running
	I0819 17:59:41.679086   32277 system_pods.go:89] "tiller-deploy-b48cc5f79-gjp98" [c259324f-94be-46a4-9f28-bb1278b517b6] Running
	I0819 17:59:41.679092   32277 system_pods.go:126] duration metric: took 7.510262ms to wait for k8s-apps to be running ...
	I0819 17:59:41.679100   32277 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:59:41.679137   32277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:59:41.690170   32277 system_svc.go:56] duration metric: took 11.064294ms WaitForService to wait for kubelet
	I0819 17:59:41.690193   32277 kubeadm.go:582] duration metric: took 2m9.019003469s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:59:41.690215   32277 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:59:41.693301   32277 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 17:59:41.693330   32277 node_conditions.go:123] node cpu capacity is 8
	I0819 17:59:41.693345   32277 node_conditions.go:105] duration metric: took 3.124766ms to run NodePressure ...
	I0819 17:59:41.693358   32277 start.go:241] waiting for startup goroutines ...
	I0819 17:59:41.693368   32277 start.go:246] waiting for cluster config update ...
	I0819 17:59:41.693387   32277 start.go:255] writing updated cluster config ...
	I0819 17:59:41.693719   32277 ssh_runner.go:195] Run: rm -f paused
	I0819 17:59:41.739546   32277 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 17:59:41.741660   32277 out.go:177] * Done! kubectl is now configured to use "addons-142951" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.954828530Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-9qvr6 from CNI network \"kindnet\" (type=ptp)"
	Aug 19 18:02:52 addons-142951 crio[1030]: time="2024-08-19 18:02:52.986603032Z" level=info msg="Stopped pod sandbox: 8e99b220ce93a9c72be8b7aadd4059ae49cb5879876f35ec0546b39464f90427" id=03aaf25e-d8c1-4ef4-8021-dc4fd1a8f485 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:02:53 addons-142951 crio[1030]: time="2024-08-19 18:02:53.025741587Z" level=info msg="Removing container: e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2" id=e715886f-e641-40e6-b8b9-4166f7e12111 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:02:53 addons-142951 crio[1030]: time="2024-08-19 18:02:53.037414881Z" level=info msg="Removed container e53684b8506edaa8e314693fe8a0fe0057187e4b6454777c04ae6de283f278e2: ingress-nginx/ingress-nginx-controller-bc57996ff-9qvr6/controller" id=e715886f-e641-40e6-b8b9-4166f7e12111 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.828630840Z" level=info msg="Removing container: c31e1c0337f3bacfa2d1a5a6115acc811fdd591cccc2072ee563cff81fc08017" id=043b719d-843c-4625-894a-e93c3f56ad24 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.840568411Z" level=info msg="Removed container c31e1c0337f3bacfa2d1a5a6115acc811fdd591cccc2072ee563cff81fc08017: ingress-nginx/ingress-nginx-admission-create-n4fmk/create" id=043b719d-843c-4625-894a-e93c3f56ad24 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.841639352Z" level=info msg="Removing container: 1fc3551382099c36139a5911b7fb04b6b336903035ba1e480c8362bb70c40f9f" id=d4028168-5522-449b-9c44-00913f4f739a name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.853791635Z" level=info msg="Removed container 1fc3551382099c36139a5911b7fb04b6b336903035ba1e480c8362bb70c40f9f: ingress-nginx/ingress-nginx-admission-patch-z9wtf/patch" id=d4028168-5522-449b-9c44-00913f4f739a name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.854909611Z" level=info msg="Stopping pod sandbox: b8ac4673d7b6f323f252f8084edc0566957798cd2de9ef3303bd365c03e2dffb" id=4e816ed0-15c8-49e7-be56-164b959b6cf2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.854942325Z" level=info msg="Stopped pod sandbox (already stopped): b8ac4673d7b6f323f252f8084edc0566957798cd2de9ef3303bd365c03e2dffb" id=4e816ed0-15c8-49e7-be56-164b959b6cf2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.855143578Z" level=info msg="Removing pod sandbox: b8ac4673d7b6f323f252f8084edc0566957798cd2de9ef3303bd365c03e2dffb" id=f4b209fc-044b-49ed-8d75-8815b5a9d1b7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.861314922Z" level=info msg="Removed pod sandbox: b8ac4673d7b6f323f252f8084edc0566957798cd2de9ef3303bd365c03e2dffb" id=f4b209fc-044b-49ed-8d75-8815b5a9d1b7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.861611492Z" level=info msg="Stopping pod sandbox: 93712d78564861d8f3524169666e2ba81bb13bdff48583f6d5e3b3a31fe44961" id=04da77a8-e687-4735-bc96-8049ad23e07b name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.861638867Z" level=info msg="Stopped pod sandbox (already stopped): 93712d78564861d8f3524169666e2ba81bb13bdff48583f6d5e3b3a31fe44961" id=04da77a8-e687-4735-bc96-8049ad23e07b name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.861855581Z" level=info msg="Removing pod sandbox: 93712d78564861d8f3524169666e2ba81bb13bdff48583f6d5e3b3a31fe44961" id=ec53655a-d5e3-4d92-a12e-b751bdd88ba9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.868855286Z" level=info msg="Removed pod sandbox: 93712d78564861d8f3524169666e2ba81bb13bdff48583f6d5e3b3a31fe44961" id=ec53655a-d5e3-4d92-a12e-b751bdd88ba9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.869252854Z" level=info msg="Stopping pod sandbox: 8e99b220ce93a9c72be8b7aadd4059ae49cb5879876f35ec0546b39464f90427" id=8d50fbf5-0536-42f8-9ca2-78f2582b3f4f name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.869282422Z" level=info msg="Stopped pod sandbox (already stopped): 8e99b220ce93a9c72be8b7aadd4059ae49cb5879876f35ec0546b39464f90427" id=8d50fbf5-0536-42f8-9ca2-78f2582b3f4f name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.869557150Z" level=info msg="Removing pod sandbox: 8e99b220ce93a9c72be8b7aadd4059ae49cb5879876f35ec0546b39464f90427" id=976c36e0-7603-4ea2-85aa-2e554a1f9c1f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.875381782Z" level=info msg="Removed pod sandbox: 8e99b220ce93a9c72be8b7aadd4059ae49cb5879876f35ec0546b39464f90427" id=976c36e0-7603-4ea2-85aa-2e554a1f9c1f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.875674982Z" level=info msg="Stopping pod sandbox: a02a29c77f03993ce257dc72f05cb470c5b25ff265d31aeb8bc85d0cf6591baf" id=c0ab8a3d-ba29-404d-adfe-f1c97aff373a name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.875708107Z" level=info msg="Stopped pod sandbox (already stopped): a02a29c77f03993ce257dc72f05cb470c5b25ff265d31aeb8bc85d0cf6591baf" id=c0ab8a3d-ba29-404d-adfe-f1c97aff373a name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.875962907Z" level=info msg="Removing pod sandbox: a02a29c77f03993ce257dc72f05cb470c5b25ff265d31aeb8bc85d0cf6591baf" id=0073cc7d-f72b-4a2a-a1b5-4f5ce2159f45 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:03:27 addons-142951 crio[1030]: time="2024-08-19 18:03:27.881409515Z" level=info msg="Removed pod sandbox: a02a29c77f03993ce257dc72f05cb470c5b25ff265d31aeb8bc85d0cf6591baf" id=0073cc7d-f72b-4a2a-a1b5-4f5ce2159f45 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Aug 19 18:04:52 addons-142951 crio[1030]: time="2024-08-19 18:04:52.613574555Z" level=info msg="Stopping container: 1faa13290108646bf0755f76ca1e026ec198376c4a6d2f7f7f638a3e53027e89 (timeout: 30s)" id=78d8fe8e-1563-4f3f-840a-f4c1b1493c2f name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8b2a4e6e93905       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   bac4fbb575a8b       hello-world-app-55bf9c44b4-pxt4b
	a9a83851c9eee       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         4 minutes ago       Running             nginx                     0                   00d7d6838f8fb       nginx
	6240ede9cf582       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   f3567280887ba       busybox
	1faa132901086       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   6 minutes ago       Running             metrics-server            0                   a27b8af2ecf8e       metrics-server-8988944d9-hggkq
	1aa875218626f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   e0c608ee1cefd       local-path-provisioner-86d989889c-lr4nz
	41cc2fea90c47       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   1e757acdc6536       storage-provisioner
	bdd3e647d13fe       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   cfc61b5cb5a0b       coredns-6f6b679f8f-fc8vt
	f1d4a608c4ff6       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b                      7 minutes ago       Running             kindnet-cni               0                   3a761bb9d42a7       kindnet-v2xdp
	da12ebabc01a5       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   d6058c2c1b050       kube-proxy-q94sk
	7858ffc81956b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        7 minutes ago       Running             kube-controller-manager   0                   0ba9d35fb0d2d       kube-controller-manager-addons-142951
	5bd5f680a9f96       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        7 minutes ago       Running             kube-apiserver            0                   2ec493c53f5ca       kube-apiserver-addons-142951
	a43a7b5c45d60       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        7 minutes ago       Running             kube-scheduler            0                   5e197d657734a       kube-scheduler-addons-142951
	bad09b5f8d830       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   6dcfa81a1bfdf       etcd-addons-142951
	
	
	==> coredns [bdd3e647d13feb1d0b2cf8e6911cb258c4c82982a0ccfd132c6803e879f11ff8] <==
	[INFO] 10.244.0.18:41099 - 13528 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087104s
	[INFO] 10.244.0.18:34639 - 26503 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005034601s
	[INFO] 10.244.0.18:34639 - 18171 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005828257s
	[INFO] 10.244.0.18:53556 - 31976 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004574803s
	[INFO] 10.244.0.18:53556 - 47083 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004782343s
	[INFO] 10.244.0.18:48640 - 17596 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003592381s
	[INFO] 10.244.0.18:48640 - 21177 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.00527288s
	[INFO] 10.244.0.18:53493 - 56716 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000062911s
	[INFO] 10.244.0.18:53493 - 21896 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000088166s
	[INFO] 10.244.0.21:42460 - 18099 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000178422s
	[INFO] 10.244.0.21:40118 - 21161 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000255157s
	[INFO] 10.244.0.21:36980 - 5595 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000122751s
	[INFO] 10.244.0.21:39783 - 39206 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017752s
	[INFO] 10.244.0.21:49330 - 10245 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086237s
	[INFO] 10.244.0.21:53682 - 8635 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117262s
	[INFO] 10.244.0.21:34287 - 20837 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005155218s
	[INFO] 10.244.0.21:55193 - 6962 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.005241981s
	[INFO] 10.244.0.21:40647 - 44533 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004754668s
	[INFO] 10.244.0.21:49338 - 18777 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005432948s
	[INFO] 10.244.0.21:41251 - 37701 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004544251s
	[INFO] 10.244.0.21:60483 - 40387 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004485269s
	[INFO] 10.244.0.21:51921 - 39378 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000706804s
	[INFO] 10.244.0.21:35130 - 14324 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000814019s
	[INFO] 10.244.0.24:42753 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000232013s
	[INFO] 10.244.0.24:36499 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148211s
	
	
	==> describe nodes <==
	Name:               addons-142951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-142951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=addons-142951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_57_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-142951
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:57:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-142951
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:04:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:03:04 +0000   Mon, 19 Aug 2024 17:57:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:03:04 +0000   Mon, 19 Aug 2024 17:57:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:03:04 +0000   Mon, 19 Aug 2024 17:57:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:03:04 +0000   Mon, 19 Aug 2024 17:57:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-142951
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac13505620b442b6bb748f645ef91266
	  System UUID:                a1df2279-d565-4b6c-bce8-72ba674e5fd0
	  Boot ID:                    78fba809-e96d-46e8-9b80-0c45215ddcd4
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     hello-world-app-55bf9c44b4-pxt4b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 coredns-6f6b679f8f-fc8vt                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m20s
	  kube-system                 etcd-addons-142951                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m26s
	  kube-system                 kindnet-v2xdp                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m20s
	  kube-system                 kube-apiserver-addons-142951               250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-controller-manager-addons-142951      200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-proxy-q94sk                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-scheduler-addons-142951               100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 metrics-server-8988944d9-hggkq             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         7m16s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  local-path-storage          local-path-provisioner-86d989889c-lr4nz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m16s  kube-proxy       
	  Normal   Starting                 7m26s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m26s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m26s  kubelet          Node addons-142951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m26s  kubelet          Node addons-142951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m26s  kubelet          Node addons-142951 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m21s  node-controller  Node addons-142951 event: Registered Node addons-142951 in Controller
	  Normal   NodeReady                7m2s   kubelet          Node addons-142951 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000606] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000604] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000620] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000610] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.563404] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.048454] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005566] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.011360] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002293] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.012653] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +7.070628] kauditd_printk_skb: 46 callbacks suppressed
	[Aug19 18:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[  +1.020545] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[  +2.015804] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[  +4.031615] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[  +8.191254] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[Aug19 18:01] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	[ +33.528823] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 e1 04 f8 ee 69 3e 60 83 23 0d 27 08 00
	
	
	==> etcd [bad09b5f8d830a4450d74ca60e1225c5de9eefd6181bd8de98204f538eea030f] <==
	{"level":"info","ts":"2024-08-19T17:57:37.260780Z","caller":"traceutil/trace.go:171","msg":"trace[1468218613] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"184.04039ms","start":"2024-08-19T17:57:37.076726Z","end":"2024-08-19T17:57:37.260766Z","steps":["trace[1468218613] 'process raft request'  (duration: 182.899769ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:37.265778Z","caller":"traceutil/trace.go:171","msg":"trace[1602020941] linearizableReadLoop","detail":"{readStateIndex:465; appliedIndex:463; }","duration":"106.076241ms","start":"2024-08-19T17:57:37.159691Z","end":"2024-08-19T17:57:37.265767Z","steps":["trace[1602020941] 'read index received'  (duration: 101.222768ms)","trace[1602020941] 'applied index is now lower than readState.Index'  (duration: 4.852994ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:57:37.265885Z","caller":"traceutil/trace.go:171","msg":"trace[2127705441] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"102.352146ms","start":"2024-08-19T17:57:37.163451Z","end":"2024-08-19T17:57:37.265803Z","steps":["trace[2127705441] 'process raft request'  (duration: 102.142904ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:37.266100Z","caller":"traceutil/trace.go:171","msg":"trace[1010502430] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"106.639352ms","start":"2024-08-19T17:57:37.159451Z","end":"2024-08-19T17:57:37.266090Z","steps":["trace[1010502430] 'process raft request'  (duration: 105.94225ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:37.266291Z","caller":"traceutil/trace.go:171","msg":"trace[1349800991] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"106.641811ms","start":"2024-08-19T17:57:37.159641Z","end":"2024-08-19T17:57:37.266283Z","steps":["trace[1349800991] 'process raft request'  (duration: 105.873765ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:37.266467Z","caller":"traceutil/trace.go:171","msg":"trace[1516627403] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"106.63181ms","start":"2024-08-19T17:57:37.159826Z","end":"2024-08-19T17:57:37.266458Z","steps":["trace[1516627403] 'process raft request'  (duration: 105.736115ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:37.265905Z","caller":"traceutil/trace.go:171","msg":"trace[995492955] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"102.246326ms","start":"2024-08-19T17:57:37.163651Z","end":"2024-08-19T17:57:37.265897Z","steps":["trace[995492955] 'process raft request'  (duration: 101.96598ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:37.266299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.593549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/tiller-deploy\" ","response":"range_response_count:1 size:4640"}
	{"level":"info","ts":"2024-08-19T17:57:37.266669Z","caller":"traceutil/trace.go:171","msg":"trace[1136239668] range","detail":"{range_begin:/registry/deployments/kube-system/tiller-deploy; range_end:; response_count:1; response_revision:460; }","duration":"106.968924ms","start":"2024-08-19T17:57:37.159689Z","end":"2024-08-19T17:57:37.266658Z","steps":["trace[1136239668] 'agreement among raft nodes before linearized reading'  (duration: 106.570965ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:37.268571Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.794583ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/tiller-clusterrolebinding\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:57:37.268614Z","caller":"traceutil/trace.go:171","msg":"trace[1245973959] range","detail":"{range_begin:/registry/clusterrolebindings/tiller-clusterrolebinding; range_end:; response_count:0; response_revision:461; }","duration":"102.849968ms","start":"2024-08-19T17:57:37.165755Z","end":"2024-08-19T17:57:37.268605Z","steps":["trace[1245973959] 'agreement among raft nodes before linearized reading'  (duration: 102.773798ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:37.268739Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.919406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/yakd-dashboard/yakd-dashboard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-19T17:57:37.268768Z","caller":"traceutil/trace.go:171","msg":"trace[915306369] range","detail":"{range_begin:/registry/serviceaccounts/yakd-dashboard/yakd-dashboard; range_end:; response_count:0; response_revision:461; }","duration":"103.948723ms","start":"2024-08-19T17:57:37.164811Z","end":"2024-08-19T17:57:37.268760Z","steps":["trace[915306369] 'agreement among raft nodes before linearized reading'  (duration: 103.904279ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:37.268879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.022192ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-08-19T17:57:37.268914Z","caller":"traceutil/trace.go:171","msg":"trace[1648820651] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:461; }","duration":"106.057458ms","start":"2024-08-19T17:57:37.162848Z","end":"2024-08-19T17:57:37.268905Z","steps":["trace[1648820651] 'agreement among raft nodes before linearized reading'  (duration: 106.001205ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:37.269032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.941117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-08-19T17:57:37.269063Z","caller":"traceutil/trace.go:171","msg":"trace[41264408] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:461; }","duration":"107.975286ms","start":"2024-08-19T17:57:37.161081Z","end":"2024-08-19T17:57:37.269056Z","steps":["trace[41264408] 'agreement among raft nodes before linearized reading'  (duration: 107.919777ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:39.161931Z","caller":"traceutil/trace.go:171","msg":"trace[1237943717] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"103.575554ms","start":"2024-08-19T17:57:39.058340Z","end":"2024-08-19T17:57:39.161915Z","steps":["trace[1237943717] 'process raft request'  (duration: 15.192953ms)","trace[1237943717] 'compare'  (duration: 87.833046ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-19T17:57:39.162087Z","caller":"traceutil/trace.go:171","msg":"trace[1152797025] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"103.362806ms","start":"2024-08-19T17:57:39.058716Z","end":"2024-08-19T17:57:39.162079Z","steps":["trace[1152797025] 'process raft request'  (duration: 102.774597ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:39.162256Z","caller":"traceutil/trace.go:171","msg":"trace[160442894] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"103.195904ms","start":"2024-08-19T17:57:39.059050Z","end":"2024-08-19T17:57:39.162245Z","steps":["trace[160442894] 'process raft request'  (duration: 102.482318ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-19T17:57:39.162376Z","caller":"traceutil/trace.go:171","msg":"trace[1696024948] linearizableReadLoop","detail":"{readStateIndex:626; appliedIndex:624; }","duration":"103.481477ms","start":"2024-08-19T17:57:39.058888Z","end":"2024-08-19T17:57:39.162369Z","steps":["trace[1696024948] 'read index received'  (duration: 14.505194ms)","trace[1696024948] 'applied index is now lower than readState.Index'  (duration: 88.974533ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-19T17:57:39.162514Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.611778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/ingress-nginx\" ","response":"range_response_count:1 size:849"}
	{"level":"info","ts":"2024-08-19T17:57:39.162547Z","caller":"traceutil/trace.go:171","msg":"trace[718473624] range","detail":"{range_begin:/registry/namespaces/ingress-nginx; range_end:; response_count:1; response_revision:614; }","duration":"103.655227ms","start":"2024-08-19T17:57:39.058884Z","end":"2024-08-19T17:57:39.162539Z","steps":["trace[718473624] 'agreement among raft nodes before linearized reading'  (duration: 103.54972ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T17:57:39.163309Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.949158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/job-controller\" ","response":"range_response_count:1 size:206"}
	{"level":"info","ts":"2024-08-19T17:57:39.163407Z","caller":"traceutil/trace.go:171","msg":"trace[876524858] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/job-controller; range_end:; response_count:1; response_revision:615; }","duration":"102.050918ms","start":"2024-08-19T17:57:39.061345Z","end":"2024-08-19T17:57:39.163396Z","steps":["trace[876524858] 'agreement among raft nodes before linearized reading'  (duration: 101.921515ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:04:53 up  1:47,  0 users,  load average: 0.03, 0.42, 0.30
	Linux addons-142951 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f1d4a608c4ff658d5c5fb88d1c4e052cba1c33ad78d07462867111b08e5bcecf] <==
	E0819 18:03:41.429735       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 18:03:50.994520       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:03:50.994553       1 main.go:299] handling current node
	I0819 18:04:00.994850       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:04:00.994884       1 main.go:299] handling current node
	W0819 18:04:01.932849       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:04:01.932879       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 18:04:10.994132       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:04:10.994168       1 main.go:299] handling current node
	W0819 18:04:12.204554       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:04:12.204583       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 18:04:20.995061       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:04:20.995104       1 main.go:299] handling current node
	I0819 18:04:30.994748       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:04:30.994787       1 main.go:299] handling current node
	W0819 18:04:31.011286       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 18:04:31.011320       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 18:04:40.994249       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:04:40.994278       1 main.go:299] handling current node
	W0819 18:04:43.658841       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:04:43.658870       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 18:04:48.391391       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:04:48.391423       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 18:04:50.994297       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:04:50.994330       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5bd5f680a9f9665f4bc003564f3f7c8ea5a83618a2352d05b89cc782c905b350] <==
	E0819 17:59:31.579693       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.83.110:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.83.110:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.83.110:443: connect: connection refused" logger="UnhandledError"
	I0819 17:59:31.612141       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0819 17:59:49.159408       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59696: use of closed network connection
	E0819 17:59:49.309088       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:59720: use of closed network connection
	I0819 18:00:03.938448       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0819 18:00:04.960820       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0819 18:00:14.830048       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0819 18:00:16.001854       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.26:57418: read: connection reset by peer
	I0819 18:00:28.951604       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0819 18:00:29.167228       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.213.191"}
	I0819 18:00:31.505238       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.197.187"}
	I0819 18:00:49.865797       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:00:49.865842       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:00:49.878567       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:00:49.878725       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:00:49.881171       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:00:49.881287       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:00:49.963301       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:00:49.963422       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0819 18:00:49.979476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0819 18:00:49.979515       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0819 18:00:50.881049       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0819 18:00:50.980577       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0819 18:00:50.986314       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0819 18:02:48.181287       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.105.25"}
	
	
	==> kube-controller-manager [7858ffc81956bd52f9b05c4c8915ca1010f03ca1211b9d77511d4e457e785664] <==
	I0819 18:02:59.892457       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0819 18:03:04.713014       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-142951"
	W0819 18:03:07.991720       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:03:07.991759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:03:08.055632       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:03:08.055670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:03:25.438341       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:03:25.438379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:03:32.675164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:03:32.675210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:03:47.905211       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:03:47.905245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:03:54.391132       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:03:54.391176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:04:08.236984       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:04:08.237019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:04:19.498036       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:04:19.498081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:04:28.271947       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:04:28.271992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:04:37.263754       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:04:37.263795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0819 18:04:49.569017       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0819 18:04:49.569053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0819 18:04:52.604848       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="4.335µs"
	
	
	==> kube-proxy [da12ebabc01a52ed8f21f64f721368a13ed52e61b98ae9f330737ee818e4ba97] <==
	I0819 17:57:36.363675       1 server_linux.go:66] "Using iptables proxy"
	I0819 17:57:37.276472       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 17:57:37.276538       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:57:37.560337       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 17:57:37.560468       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:57:37.673720       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:57:37.676266       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:57:37.676418       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:57:37.678156       1 config.go:197] "Starting service config controller"
	I0819 17:57:37.679639       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:57:37.678726       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:57:37.679671       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:57:37.679325       1 config.go:326] "Starting node config controller"
	I0819 17:57:37.679680       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:57:37.779937       1 shared_informer.go:320] Caches are synced for node config
	I0819 17:57:37.780009       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:57:37.780043       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a43a7b5c45d60b44e35ead28284225f05e9ef9bd274ac5636431a2eaea0f097c] <==
	E0819 17:57:25.382431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0819 17:57:25.382447       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:25.382568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 17:57:25.382594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:25.382621       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0819 17:57:25.382629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:57:25.382646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0819 17:57:25.382647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:25.382662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0819 17:57:25.382771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:57:25.382776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0819 17:57:25.382791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:25.382682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:57:25.382816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:26.225317       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:57:26.225354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:26.239686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 17:57:26.239716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:26.307124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:57:26.307173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:57:26.320392       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:57:26.320429       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 17:57:26.433418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:57:26.433460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 17:57:28.880589       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 18:03:27 addons-142951 kubelet[1633]: E0819 18:03:27.736877    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090607736699268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:27 addons-142951 kubelet[1633]: E0819 18:03:27.736909    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090607736699268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:27 addons-142951 kubelet[1633]: I0819 18:03:27.827562    1633 scope.go:117] "RemoveContainer" containerID="c31e1c0337f3bacfa2d1a5a6115acc811fdd591cccc2072ee563cff81fc08017"
	Aug 19 18:03:27 addons-142951 kubelet[1633]: I0819 18:03:27.840778    1633 scope.go:117] "RemoveContainer" containerID="1fc3551382099c36139a5911b7fb04b6b336903035ba1e480c8362bb70c40f9f"
	Aug 19 18:03:37 addons-142951 kubelet[1633]: E0819 18:03:37.739561    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090617739394692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:37 addons-142951 kubelet[1633]: E0819 18:03:37.739591    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090617739394692,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:47 addons-142951 kubelet[1633]: E0819 18:03:47.741571    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090627741398070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:47 addons-142951 kubelet[1633]: E0819 18:03:47.741600    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090627741398070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:57 addons-142951 kubelet[1633]: E0819 18:03:57.743671    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090637743457936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:57 addons-142951 kubelet[1633]: E0819 18:03:57.743711    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090637743457936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:03:59 addons-142951 kubelet[1633]: I0819 18:03:59.680969    1633 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-6f6b679f8f-fc8vt" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 18:04:07 addons-142951 kubelet[1633]: E0819 18:04:07.746393    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090647746216703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:04:07 addons-142951 kubelet[1633]: E0819 18:04:07.746424    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090647746216703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:04:17 addons-142951 kubelet[1633]: E0819 18:04:17.749438    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090657749228636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:04:17 addons-142951 kubelet[1633]: E0819 18:04:17.749485    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090657749228636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:04:27 addons-142951 kubelet[1633]: E0819 18:04:27.752276    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090667752067106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:04:27 addons-142951 kubelet[1633]: E0819 18:04:27.752309    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090667752067106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:04:37 addons-142951 kubelet[1633]: E0819 18:04:37.754844    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090677754629583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:04:37 addons-142951 kubelet[1633]: E0819 18:04:37.754876    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090677754629583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:04:47 addons-142951 kubelet[1633]: E0819 18:04:47.756804    1633 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090687756640186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:04:47 addons-142951 kubelet[1633]: E0819 18:04:47.756835    1633 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724090687756640186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:616564,},InodesUsed:&UInt64Value{Value:247,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:04:53 addons-142951 kubelet[1633]: I0819 18:04:53.947208    1633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0dca4d1b-5042-4c63-b3e2-04f12c5f19a8-tmp-dir\") pod \"0dca4d1b-5042-4c63-b3e2-04f12c5f19a8\" (UID: \"0dca4d1b-5042-4c63-b3e2-04f12c5f19a8\") "
	Aug 19 18:04:53 addons-142951 kubelet[1633]: I0819 18:04:53.947270    1633 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxqkd\" (UniqueName: \"kubernetes.io/projected/0dca4d1b-5042-4c63-b3e2-04f12c5f19a8-kube-api-access-lxqkd\") pod \"0dca4d1b-5042-4c63-b3e2-04f12c5f19a8\" (UID: \"0dca4d1b-5042-4c63-b3e2-04f12c5f19a8\") "
	Aug 19 18:04:53 addons-142951 kubelet[1633]: I0819 18:04:53.947372    1633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dca4d1b-5042-4c63-b3e2-04f12c5f19a8-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "0dca4d1b-5042-4c63-b3e2-04f12c5f19a8" (UID: "0dca4d1b-5042-4c63-b3e2-04f12c5f19a8"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 19 18:04:53 addons-142951 kubelet[1633]: I0819 18:04:53.948879    1633 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dca4d1b-5042-4c63-b3e2-04f12c5f19a8-kube-api-access-lxqkd" (OuterVolumeSpecName: "kube-api-access-lxqkd") pod "0dca4d1b-5042-4c63-b3e2-04f12c5f19a8" (UID: "0dca4d1b-5042-4c63-b3e2-04f12c5f19a8"). InnerVolumeSpecName "kube-api-access-lxqkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	
	
	==> storage-provisioner [41cc2fea90c47a7827759dee3094c7a22d3951da0957eac497b9ee9cfdf70ac6] <==
	I0819 17:57:52.064040       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 17:57:52.070925       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 17:57:52.070979       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 17:57:52.078052       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 17:57:52.078257       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-142951_c559a01b-a10e-4b95-88f0-1d537cbdbbf2!
	I0819 17:57:52.078525       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe4283e4-2495-42de-8646-1972a4e1b497", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-142951_c559a01b-a10e-4b95-88f0-1d537cbdbbf2 became leader
	I0819 17:57:52.179276       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-142951_c559a01b-a10e-4b95-88f0-1d537cbdbbf2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-142951 -n addons-142951
helpers_test.go:261: (dbg) Run:  kubectl --context addons-142951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (296.97s)
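Note: the kube-apiserver log in the post-mortem above reports v1beta1.metrics.k8s.io as "failing or missing response ... connect: connection refused", which is consistent with this metrics-server test timing out. As a rough illustration only (this is not the check the test itself runs), a minimal client-go sketch like the following could probe whether the aggregated metrics API is being served; the kubeconfig path and the error handling are assumptions made for the example.

	// metrics_probe.go - hypothetical helper, not part of the minikube test suite.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; minikube writes the addons-142951 context here.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Ask the aggregation layer whether metrics.k8s.io/v1beta1 is actually served;
		// a "connection refused" from the APIService backend surfaces as an error here.
		resources, err := clientset.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		if err != nil {
			fmt.Println("metrics API unavailable:", err)
			return
		}
		for _, r := range resources.APIResources {
			fmt.Println("served resource:", r.Name)
		}
	}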

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (124.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-896148 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 18:15:09.771586   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:15:13.485737   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-896148 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m1.095906451s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-896148       NotReady   control-plane   8m33s   v1.31.0
	ha-896148-m02   Ready      control-plane   8m14s   v1.31.0
	ha-896148-m04   Ready      <none>          6m59s   v1.31.0

                                                
                                                
-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
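For context on the assertion at ha_test.go:597: the go-template above prints each node's Ready condition, and the restarted control plane node ha-896148 reports Unknown rather than True. The sketch below is a hypothetical client-go equivalent of that readiness check (it is not the code the test runs), under the same assumed kubeconfig location as the earlier example.

	// node_ready_check.go - hypothetical equivalent of the go-template readiness check.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location for the ha-896148 context.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		// Print the Ready condition per node; a NotReady node shows False or Unknown,
		// matching the "Unknown / True / True" output captured above.
		for _, node := range nodes.Items {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					fmt.Printf("%s\t%s\n", node.Name, cond.Status)
				}
			}
		}
	}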
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-896148
helpers_test.go:235: (dbg) docker inspect ha-896148:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9492a2c00d6808671da2ea3fbbe3dc36f0775f97f86abc0b4a9e1601c80a4a91",
	        "Created": "2024-08-19T18:08:15.641686364Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 112862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T18:15:00.050261162Z",
	            "FinishedAt": "2024-08-19T18:14:59.325809192Z"
	        },
	        "Image": "sha256:197224e1b90979b98de246567852a03b60e3aa31dcd0de02a456282118daeb84",
	        "ResolvConfPath": "/var/lib/docker/containers/9492a2c00d6808671da2ea3fbbe3dc36f0775f97f86abc0b4a9e1601c80a4a91/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9492a2c00d6808671da2ea3fbbe3dc36f0775f97f86abc0b4a9e1601c80a4a91/hostname",
	        "HostsPath": "/var/lib/docker/containers/9492a2c00d6808671da2ea3fbbe3dc36f0775f97f86abc0b4a9e1601c80a4a91/hosts",
	        "LogPath": "/var/lib/docker/containers/9492a2c00d6808671da2ea3fbbe3dc36f0775f97f86abc0b4a9e1601c80a4a91/9492a2c00d6808671da2ea3fbbe3dc36f0775f97f86abc0b4a9e1601c80a4a91-json.log",
	        "Name": "/ha-896148",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ha-896148:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-896148",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07b35f44df4b52c440174b8c0a77789bb34be18015e6634c0b44f0cc4d3b0c03-init/diff:/var/lib/docker/overlay2/0c2c9fdec01bef3a098fb8513a31b324e686eebb183f0aaad2be170703b9d191/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07b35f44df4b52c440174b8c0a77789bb34be18015e6634c0b44f0cc4d3b0c03/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07b35f44df4b52c440174b8c0a77789bb34be18015e6634c0b44f0cc4d3b0c03/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07b35f44df4b52c440174b8c0a77789bb34be18015e6634c0b44f0cc4d3b0c03/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-896148",
	                "Source": "/var/lib/docker/volumes/ha-896148/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-896148",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-896148",
	                "name.minikube.sigs.k8s.io": "ha-896148",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7545797e8949570b46233a01d105dba39610accecf7a49143c2b99053ac74567",
	            "SandboxKey": "/var/run/docker/netns/7545797e8949",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-896148": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "066a0325646546aa1b150e9e472f0d56f7efbfde3aeed0f1be866b72eab7ac12",
	                    "EndpointID": "74893dcad29d4818226a54777075c66f04cf190a7088110f82ed2c714c1bbf82",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-896148",
	                        "9492a2c00d68"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-896148 -n ha-896148
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-896148 logs -n 25: (1.497858782s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-896148 cp ha-896148-m03:/home/docker/cp-test.txt                              | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148-m04:/home/docker/cp-test_ha-896148-m03_ha-896148-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-896148 ssh -n                                                                 | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-896148 ssh -n ha-896148-m04 sudo cat                                          | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | /home/docker/cp-test_ha-896148-m03_ha-896148-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-896148 cp testdata/cp-test.txt                                                | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-896148 ssh -n                                                                 | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-896148 cp ha-896148-m04:/home/docker/cp-test.txt                              | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1081370510/001/cp-test_ha-896148-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-896148 ssh -n                                                                 | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-896148 cp ha-896148-m04:/home/docker/cp-test.txt                              | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148:/home/docker/cp-test_ha-896148-m04_ha-896148.txt                       |           |         |         |                     |                     |
	| ssh     | ha-896148 ssh -n                                                                 | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-896148 ssh -n ha-896148 sudo cat                                              | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | /home/docker/cp-test_ha-896148-m04_ha-896148.txt                                 |           |         |         |                     |                     |
	| cp      | ha-896148 cp ha-896148-m04:/home/docker/cp-test.txt                              | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148-m02:/home/docker/cp-test_ha-896148-m04_ha-896148-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-896148 ssh -n                                                                 | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-896148 ssh -n ha-896148-m02 sudo cat                                          | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | /home/docker/cp-test_ha-896148-m04_ha-896148-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-896148 cp ha-896148-m04:/home/docker/cp-test.txt                              | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148-m03:/home/docker/cp-test_ha-896148-m04_ha-896148-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-896148 ssh -n                                                                 | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | ha-896148-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-896148 ssh -n ha-896148-m03 sudo cat                                          | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | /home/docker/cp-test_ha-896148-m04_ha-896148-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-896148 node stop m02 -v=7                                                     | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-896148 node start m02 -v=7                                                    | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:10 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-896148 -v=7                                                           | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-896148 -v=7                                                                | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-896148 --wait=true -v=7                                                    | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:11 UTC | 19 Aug 24 18:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-896148                                                                | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:14 UTC |                     |
	| node    | ha-896148 node delete m03 -v=7                                                   | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:14 UTC | 19 Aug 24 18:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-896148 stop -v=7                                                              | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:14 UTC | 19 Aug 24 18:14 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-896148 --wait=true                                                         | ha-896148 | jenkins | v1.33.1 | 19 Aug 24 18:14 UTC | 19 Aug 24 18:17 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 18:14:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 18:14:59.728086  112560 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:14:59.728207  112560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:14:59.728217  112560 out.go:358] Setting ErrFile to fd 2...
	I0819 18:14:59.728221  112560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:14:59.728389  112560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 18:14:59.728928  112560 out.go:352] Setting JSON to false
	I0819 18:14:59.729852  112560 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7050,"bootTime":1724084250,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:14:59.729902  112560 start.go:139] virtualization: kvm guest
	I0819 18:14:59.732192  112560 out.go:177] * [ha-896148] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:14:59.733499  112560 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:14:59.733562  112560 notify.go:220] Checking for updates...
	I0819 18:14:59.736015  112560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:14:59.737217  112560 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 18:14:59.738497  112560 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	I0819 18:14:59.739743  112560 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:14:59.740928  112560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:14:59.742459  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:14:59.742918  112560 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:14:59.764766  112560 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 18:14:59.764912  112560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:14:59.810699  112560 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:42 SystemTime:2024-08-19 18:14:59.800857841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 18:14:59.810805  112560 docker.go:307] overlay module found
	I0819 18:14:59.813017  112560 out.go:177] * Using the docker driver based on existing profile
	I0819 18:14:59.814283  112560 start.go:297] selected driver: docker
	I0819 18:14:59.814303  112560 start.go:901] validating driver "docker" against &{Name:ha-896148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-896148 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:14:59.814417  112560 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:14:59.814485  112560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:14:59.861522  112560 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:42 SystemTime:2024-08-19 18:14:59.853312768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 18:14:59.862127  112560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:14:59.862187  112560 cni.go:84] Creating CNI manager for ""
	I0819 18:14:59.862198  112560 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 18:14:59.862235  112560 start.go:340] cluster config:
	{Name:ha-896148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-896148 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:14:59.864355  112560 out.go:177] * Starting "ha-896148" primary control-plane node in "ha-896148" cluster
	I0819 18:14:59.865790  112560 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 18:14:59.867199  112560 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 18:14:59.868656  112560 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:14:59.868697  112560 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0819 18:14:59.868705  112560 cache.go:56] Caching tarball of preloaded images
	I0819 18:14:59.868760  112560 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 18:14:59.868769  112560 preload.go:172] Found /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:14:59.868854  112560 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:14:59.869009  112560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/config.json ...
	W0819 18:14:59.887414  112560 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 18:14:59.887429  112560 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 18:14:59.887512  112560 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 18:14:59.887526  112560 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 18:14:59.887532  112560 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 18:14:59.887541  112560 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 18:14:59.887550  112560 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 18:14:59.888694  112560 image.go:273] response: 
	I0819 18:14:59.936232  112560 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 18:14:59.936277  112560 cache.go:194] Successfully downloaded all kic artifacts
	I0819 18:14:59.936312  112560 start.go:360] acquireMachinesLock for ha-896148: {Name:mke1894ae00616414143fb195f7858b37140f4fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:14:59.936377  112560 start.go:364] duration metric: took 42.68µs to acquireMachinesLock for "ha-896148"
	I0819 18:14:59.936399  112560 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:14:59.936407  112560 fix.go:54] fixHost starting: 
	I0819 18:14:59.936703  112560 cli_runner.go:164] Run: docker container inspect ha-896148 --format={{.State.Status}}
	I0819 18:14:59.952647  112560 fix.go:112] recreateIfNeeded on ha-896148: state=Stopped err=<nil>
	W0819 18:14:59.952672  112560 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:14:59.954475  112560 out.go:177] * Restarting existing docker container for "ha-896148" ...
	I0819 18:14:59.955610  112560 cli_runner.go:164] Run: docker start ha-896148
	I0819 18:15:00.208337  112560 cli_runner.go:164] Run: docker container inspect ha-896148 --format={{.State.Status}}
	I0819 18:15:00.226408  112560 kic.go:430] container "ha-896148" state is running.
	I0819 18:15:00.226784  112560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148
	I0819 18:15:00.245071  112560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/config.json ...
	I0819 18:15:00.245343  112560 machine.go:93] provisionDockerMachine start ...
	I0819 18:15:00.245411  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:15:00.262899  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:00.263136  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0819 18:15:00.263158  112560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:15:00.263732  112560 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42960->127.0.0.1:32828: read: connection reset by peer
	I0819 18:15:03.380236  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896148
	
	I0819 18:15:03.380264  112560 ubuntu.go:169] provisioning hostname "ha-896148"
	I0819 18:15:03.380325  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:15:03.396959  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:03.397162  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0819 18:15:03.397176  112560 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-896148 && echo "ha-896148" | sudo tee /etc/hostname
	I0819 18:15:03.522770  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896148
	
	I0819 18:15:03.522877  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:15:03.539452  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:03.539668  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0819 18:15:03.539689  112560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-896148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-896148/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-896148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:15:03.652972  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:15:03.653001  112560 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19468-24160/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-24160/.minikube}
	I0819 18:15:03.653047  112560 ubuntu.go:177] setting up certificates
	I0819 18:15:03.653057  112560 provision.go:84] configureAuth start
	I0819 18:15:03.653102  112560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148
	I0819 18:15:03.669002  112560 provision.go:143] copyHostCerts
	I0819 18:15:03.669037  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem
	I0819 18:15:03.669069  112560 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem, removing ...
	I0819 18:15:03.669077  112560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem
	I0819 18:15:03.669173  112560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem (1679 bytes)
	I0819 18:15:03.669272  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem
	I0819 18:15:03.669297  112560 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem, removing ...
	I0819 18:15:03.669307  112560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem
	I0819 18:15:03.669346  112560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem (1078 bytes)
	I0819 18:15:03.669408  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem
	I0819 18:15:03.669432  112560 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem, removing ...
	I0819 18:15:03.669449  112560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem
	I0819 18:15:03.669484  112560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem (1123 bytes)
	I0819 18:15:03.669548  112560 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem org=jenkins.ha-896148 san=[127.0.0.1 192.168.49.2 ha-896148 localhost minikube]
	I0819 18:15:03.726504  112560 provision.go:177] copyRemoteCerts
	I0819 18:15:03.726558  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:15:03.726589  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:15:03.743198  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148/id_rsa Username:docker}
	I0819 18:15:03.833384  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:15:03.833436  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:15:03.853936  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:15:03.854002  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0819 18:15:03.874183  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:15:03.874243  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:15:03.894244  112560 provision.go:87] duration metric: took 241.174747ms to configureAuth
	I0819 18:15:03.894269  112560 ubuntu.go:193] setting minikube options for container-runtime
	I0819 18:15:03.894451  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:03.894540  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:15:03.910705  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:03.910875  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0819 18:15:03.910892  112560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:15:04.213828  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:15:04.213854  112560 machine.go:96] duration metric: took 3.968494886s to provisionDockerMachine
	I0819 18:15:04.213866  112560 start.go:293] postStartSetup for "ha-896148" (driver="docker")
	I0819 18:15:04.213878  112560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:15:04.213944  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:15:04.213996  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:15:04.232051  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148/id_rsa Username:docker}
	I0819 18:15:04.321279  112560 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:15:04.324096  112560 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 18:15:04.324125  112560 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 18:15:04.324133  112560 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 18:15:04.324139  112560 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 18:15:04.324148  112560 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-24160/.minikube/addons for local assets ...
	I0819 18:15:04.324187  112560 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-24160/.minikube/files for local assets ...
	I0819 18:15:04.324257  112560 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem -> 309662.pem in /etc/ssl/certs
	I0819 18:15:04.324266  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem -> /etc/ssl/certs/309662.pem
	I0819 18:15:04.324340  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:15:04.331759  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem --> /etc/ssl/certs/309662.pem (1708 bytes)
	I0819 18:15:04.352095  112560 start.go:296] duration metric: took 138.218535ms for postStartSetup
	I0819 18:15:04.352162  112560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:15:04.352206  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:15:04.368584  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148/id_rsa Username:docker}
	I0819 18:15:04.457529  112560 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 18:15:04.461342  112560 fix.go:56] duration metric: took 4.524931522s for fixHost
	I0819 18:15:04.461362  112560 start.go:83] releasing machines lock for "ha-896148", held for 4.52497125s
	I0819 18:15:04.461423  112560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148
	I0819 18:15:04.477741  112560 ssh_runner.go:195] Run: cat /version.json
	I0819 18:15:04.477786  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:15:04.477839  112560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:15:04.477912  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:15:04.494439  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148/id_rsa Username:docker}
	I0819 18:15:04.494809  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148/id_rsa Username:docker}
	I0819 18:15:04.576091  112560 ssh_runner.go:195] Run: systemctl --version
	I0819 18:15:04.580032  112560 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:15:04.715696  112560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 18:15:04.719730  112560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:15:04.727391  112560 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 18:15:04.727447  112560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:15:04.734935  112560 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 18:15:04.734952  112560 start.go:495] detecting cgroup driver to use...
	I0819 18:15:04.734979  112560 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 18:15:04.735018  112560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:15:04.745260  112560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:15:04.754606  112560 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:15:04.754647  112560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:15:04.765401  112560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:15:04.774956  112560 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:15:04.843592  112560 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:15:04.915142  112560 docker.go:233] disabling docker service ...
	I0819 18:15:04.915193  112560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:15:04.925682  112560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:15:04.934873  112560 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:15:05.008572  112560 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:15:05.080662  112560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:15:05.090532  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:15:05.104353  112560 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:15:05.104401  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:05.113068  112560 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:15:05.113114  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:05.121421  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:05.129720  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:05.137774  112560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:15:05.145443  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:05.153656  112560 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:05.161735  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:05.169937  112560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:15:05.176944  112560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:15:05.183977  112560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:15:05.260499  112560 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:15:05.364614  112560 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:15:05.364685  112560 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:15:05.367876  112560 start.go:563] Will wait 60s for crictl version
	I0819 18:15:05.367934  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:15:05.370833  112560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:15:05.402789  112560 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 18:15:05.402856  112560 ssh_runner.go:195] Run: crio --version
	I0819 18:15:05.434988  112560 ssh_runner.go:195] Run: crio --version
	I0819 18:15:05.470491  112560 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 18:15:05.471664  112560 cli_runner.go:164] Run: docker network inspect ha-896148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 18:15:05.487944  112560 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 18:15:05.491245  112560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:15:05.501364  112560 kubeadm.go:883] updating cluster {Name:ha-896148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-896148 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 18:15:05.501512  112560 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:15:05.501558  112560 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:15:05.541188  112560 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:15:05.541211  112560 crio.go:433] Images already preloaded, skipping extraction
	I0819 18:15:05.541262  112560 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 18:15:05.572379  112560 crio.go:514] all images are preloaded for cri-o runtime.
	I0819 18:15:05.572398  112560 cache_images.go:84] Images are preloaded, skipping loading
	I0819 18:15:05.572406  112560 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 crio true true} ...
	I0819 18:15:05.572505  112560 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-896148 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-896148 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:15:05.572564  112560 ssh_runner.go:195] Run: crio config
	I0819 18:15:05.612425  112560 cni.go:84] Creating CNI manager for ""
	I0819 18:15:05.612445  112560 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0819 18:15:05.612453  112560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 18:15:05.612480  112560 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-896148 NodeName:ha-896148 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 18:15:05.612611  112560 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-896148"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
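
The YAML above is rendered from the kubeadm options captured a few lines earlier (kubeadm.go:181). As a rough illustration of that kind of rendering only, and not minikube's actual template or types, a ClusterConfiguration fragment can be produced with Go's text/template; the clusterParams struct below is invented for this sketch.

// Minimal sketch (not minikube's actual template code): render a kubeadm
// ClusterConfiguration fragment from a handful of cluster parameters.
package main

import (
	"os"
	"text/template"
)

// clusterParams holds only the fields this sketch needs; names are illustrative.
type clusterParams struct {
	BindPort      int
	ClusterName   string
	PodSubnet     string
	ServiceSubnet string
	K8sVersion    string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.K8sVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		BindPort:      8443,
		ClusterName:   "mk",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
		K8sVersion:    "v1.31.0",
	}
	// template.Must panics on a parse error, which is acceptable for a constant template.
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}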
	
	I0819 18:15:05.612631  112560 kube-vip.go:115] generating kube-vip config ...
	I0819 18:15:05.612663  112560 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0819 18:15:05.623389  112560 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:15:05.623480  112560 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
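
The manifest above reflects the `lsmod | grep ip_vs` probe a few lines earlier (kube-vip.go:115-163): because the module check failed, kube-vip is configured for ARP-based leader election on the VIP rather than IPVS load balancing. A minimal sketch of that decision follows; it is illustrative only, and runCommand stands in for minikube's SSH runner rather than its real API.

// Sketch of the fallback decision: probe for the ip_vs kernel module and, if it
// is missing, skip control-plane load balancing in the generated kube-vip config.
package main

import (
	"fmt"
	"os/exec"
)

// runCommand executes a shell command; here it simply runs locally as a stand-in
// for executing over SSH on the node.
func runCommand(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

func main() {
	enableLB := true
	// `lsmod | grep ip_vs` exits non-zero when the module is not loaded.
	if err := runCommand("lsmod | grep ip_vs"); err != nil {
		fmt.Println("ip_vs module not available, disabling control-plane load balancing:", err)
		enableLB = false
	}
	// This flag would decide whether load-balancing settings are added to the manifest.
	fmt.Println("load balancing enabled:", enableLB)
}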
	I0819 18:15:05.623525  112560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:15:05.630924  112560 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:15:05.630984  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0819 18:15:05.638399  112560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0819 18:15:05.653346  112560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:15:05.668227  112560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0819 18:15:05.682810  112560 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0819 18:15:05.697802  112560 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:15:05.700690  112560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
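
Both hosts edits above follow the same pattern: filter out any stale line for the hostname, append the current mapping, and copy the temp file over /etc/hosts. A small, purely illustrative Go helper that builds a command of that shape (not minikube's actual code):

// Illustrative only: build the kind of shell one-liner seen above, which drops
// any stale entry for a hostname from /etc/hosts and re-adds the current IP.
package main

import "fmt"

// hostsUpdateCmd returns a bash command that removes existing lines ending in
// "<tab><host>" from /etc/hosts and appends "<ip><tab><host>", via a temp file.
func hostsUpdateCmd(ip, host string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		host, ip, host)
}

func main() {
	fmt.Println(hostsUpdateCmd("192.168.49.254", "control-plane.minikube.internal"))
}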
	I0819 18:15:05.709955  112560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:15:05.780420  112560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:15:05.791992  112560 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148 for IP: 192.168.49.2
	I0819 18:15:05.792009  112560 certs.go:194] generating shared ca certs ...
	I0819 18:15:05.792022  112560 certs.go:226] acquiring lock for ca certs: {Name:mk29d2f357e66b5ff77917021423cbbf2fc2a40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:15:05.792165  112560 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key
	I0819 18:15:05.792228  112560 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key
	I0819 18:15:05.792240  112560 certs.go:256] generating profile certs ...
	I0819 18:15:05.792327  112560 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/client.key
	I0819 18:15:05.792356  112560 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.key.ead79eef
	I0819 18:15:05.792395  112560 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.crt.ead79eef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0819 18:15:06.093440  112560 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.crt.ead79eef ...
	I0819 18:15:06.093467  112560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.crt.ead79eef: {Name:mk7a8f73dda8a4641ee4dc9d4fb05d57d66c11f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:15:06.093653  112560 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.key.ead79eef ...
	I0819 18:15:06.093679  112560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.key.ead79eef: {Name:mke344bdf375838aa2c719aa58729a1f3162fb8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:15:06.093784  112560 certs.go:381] copying /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.crt.ead79eef -> /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.crt
	I0819 18:15:06.093988  112560 certs.go:385] copying /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.key.ead79eef -> /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.key
	I0819 18:15:06.094177  112560 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/proxy-client.key
	I0819 18:15:06.094223  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:15:06.094251  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:15:06.094269  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:15:06.094284  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:15:06.094300  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:15:06.094320  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:15:06.094341  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:15:06.094358  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:15:06.094422  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966.pem (1338 bytes)
	W0819 18:15:06.094464  112560 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966_empty.pem, impossibly tiny 0 bytes
	I0819 18:15:06.094478  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 18:15:06.094508  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:15:06.094551  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:15:06.094583  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem (1679 bytes)
	I0819 18:15:06.094639  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem (1708 bytes)
	I0819 18:15:06.094680  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:15:06.094700  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966.pem -> /usr/share/ca-certificates/30966.pem
	I0819 18:15:06.094719  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem -> /usr/share/ca-certificates/309662.pem
	I0819 18:15:06.095322  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:15:06.116761  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:15:06.137322  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:15:06.157098  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:15:06.176972  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 18:15:06.197385  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:15:06.217822  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:15:06.238724  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:15:06.259125  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:15:06.279372  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966.pem --> /usr/share/ca-certificates/30966.pem (1338 bytes)
	I0819 18:15:06.299469  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem --> /usr/share/ca-certificates/309662.pem (1708 bytes)
	I0819 18:15:06.319346  112560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 18:15:06.334388  112560 ssh_runner.go:195] Run: openssl version
	I0819 18:15:06.339203  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:15:06.347745  112560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:15:06.350937  112560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:15:06.350978  112560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:15:06.356866  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:15:06.364388  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30966.pem && ln -fs /usr/share/ca-certificates/30966.pem /etc/ssl/certs/30966.pem"
	I0819 18:15:06.372222  112560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/30966.pem
	I0819 18:15:06.375143  112560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:05 /usr/share/ca-certificates/30966.pem
	I0819 18:15:06.375180  112560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30966.pem
	I0819 18:15:06.381070  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30966.pem /etc/ssl/certs/51391683.0"
	I0819 18:15:06.388574  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/309662.pem && ln -fs /usr/share/ca-certificates/309662.pem /etc/ssl/certs/309662.pem"
	I0819 18:15:06.396359  112560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309662.pem
	I0819 18:15:06.399193  112560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:05 /usr/share/ca-certificates/309662.pem
	I0819 18:15:06.399231  112560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309662.pem
	I0819 18:15:06.405027  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/309662.pem /etc/ssl/certs/3ec20f2e.0"
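
The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's CA lookup convention: <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A sketch of that step (illustrative only; it assumes the openssl binary is on PATH and write access to the certificates directory):

// Create the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL-based clients
// use to find a CA certificate, mirroring the openssl + ln -fs pair in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace an existing link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}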
	I0819 18:15:06.412503  112560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:15:06.415658  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:15:06.421318  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:15:06.426774  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:15:06.432461  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:15:06.438019  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:15:06.443552  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
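
Each `-checkend 86400` invocation above asks whether the certificate will still be valid 24 hours from now. A minimal Go equivalent of that check (illustrative, not the code minikube actually runs on the node):

// Report whether a PEM-encoded certificate expires within a given window,
// matching what a non-zero exit from `openssl x509 -checkend 86400` signals.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon) // true corresponds to a failed -checkend
}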
	I0819 18:15:06.449180  112560 kubeadm.go:392] StartCluster: {Name:ha-896148 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-896148 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:15:06.449294  112560 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0819 18:15:06.449332  112560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 18:15:06.480371  112560 cri.go:89] found id: ""
	I0819 18:15:06.480424  112560 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 18:15:06.488112  112560 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 18:15:06.488132  112560 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 18:15:06.488176  112560 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 18:15:06.495418  112560 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:15:06.495819  112560 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-896148" does not appear in /home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 18:15:06.495929  112560 kubeconfig.go:62] /home/jenkins/minikube-integration/19468-24160/kubeconfig needs updating (will repair): [kubeconfig missing "ha-896148" cluster setting kubeconfig missing "ha-896148" context setting]
	I0819 18:15:06.496173  112560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/kubeconfig: {Name:mk3fc9bc92b0be5459854fbe59603f93f92756ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:15:06.496517  112560 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 18:15:06.496714  112560 kapi.go:59] client config for ha-896148: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/client.crt", KeyFile:"/home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/client.key", CAFile:"/home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0819 18:15:06.497097  112560 cert_rotation.go:140] Starting client certificate rotation controller
	I0819 18:15:06.497346  112560 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 18:15:06.504529  112560 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0819 18:15:06.504553  112560 kubeadm.go:597] duration metric: took 16.414483ms to restartPrimaryControlPlane
	I0819 18:15:06.504579  112560 kubeadm.go:394] duration metric: took 55.401451ms to StartCluster
	I0819 18:15:06.504601  112560 settings.go:142] acquiring lock: {Name:mkd30ec37009c3562b283392e8fb1c4131be31b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:15:06.504658  112560 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 18:15:06.505109  112560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/kubeconfig: {Name:mk3fc9bc92b0be5459854fbe59603f93f92756ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:15:06.505313  112560 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:15:06.505345  112560 start.go:241] waiting for startup goroutines ...
	I0819 18:15:06.505365  112560 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 18:15:06.505526  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:06.508365  112560 out.go:177] * Enabled addons: 
	I0819 18:15:06.509482  112560 addons.go:510] duration metric: took 4.121609ms for enable addons: enabled=[]
	I0819 18:15:06.509511  112560 start.go:246] waiting for cluster config update ...
	I0819 18:15:06.509522  112560 start.go:255] writing updated cluster config ...
	I0819 18:15:06.510822  112560 out.go:201] 
	I0819 18:15:06.512003  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:06.512081  112560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/config.json ...
	I0819 18:15:06.513469  112560 out.go:177] * Starting "ha-896148-m02" control-plane node in "ha-896148" cluster
	I0819 18:15:06.514428  112560 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 18:15:06.515507  112560 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 18:15:06.516478  112560 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:15:06.516497  112560 cache.go:56] Caching tarball of preloaded images
	I0819 18:15:06.516496  112560 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 18:15:06.516574  112560 preload.go:172] Found /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:15:06.516590  112560 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:15:06.516672  112560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/config.json ...
	W0819 18:15:06.534601  112560 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 18:15:06.534615  112560 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 18:15:06.534684  112560 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 18:15:06.534702  112560 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 18:15:06.534709  112560 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 18:15:06.534716  112560 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 18:15:06.534723  112560 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 18:15:06.535639  112560 image.go:273] response: 
	I0819 18:15:06.585299  112560 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 18:15:06.585336  112560 cache.go:194] Successfully downloaded all kic artifacts
	I0819 18:15:06.585364  112560 start.go:360] acquireMachinesLock for ha-896148-m02: {Name:mk55581823cd9940e243efb870753aafef5fa6fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:15:06.585419  112560 start.go:364] duration metric: took 37.987µs to acquireMachinesLock for "ha-896148-m02"
	I0819 18:15:06.585442  112560 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:15:06.585449  112560 fix.go:54] fixHost starting: m02
	I0819 18:15:06.585665  112560 cli_runner.go:164] Run: docker container inspect ha-896148-m02 --format={{.State.Status}}
	I0819 18:15:06.601901  112560 fix.go:112] recreateIfNeeded on ha-896148-m02: state=Stopped err=<nil>
	W0819 18:15:06.601931  112560 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:15:06.603710  112560 out.go:177] * Restarting existing docker container for "ha-896148-m02" ...
	I0819 18:15:06.604950  112560 cli_runner.go:164] Run: docker start ha-896148-m02
	I0819 18:15:06.855882  112560 cli_runner.go:164] Run: docker container inspect ha-896148-m02 --format={{.State.Status}}
	I0819 18:15:06.872647  112560 kic.go:430] container "ha-896148-m02" state is running.
	I0819 18:15:06.873018  112560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148-m02
	I0819 18:15:06.890711  112560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/config.json ...
	I0819 18:15:06.890917  112560 machine.go:93] provisionDockerMachine start ...
	I0819 18:15:06.890967  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m02
	I0819 18:15:06.907462  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:06.907676  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0819 18:15:06.907693  112560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:15:06.908330  112560 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45958->127.0.0.1:32833: read: connection reset by peer
	I0819 18:15:10.028424  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896148-m02
	
	I0819 18:15:10.028453  112560 ubuntu.go:169] provisioning hostname "ha-896148-m02"
	I0819 18:15:10.028512  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m02
	I0819 18:15:10.045955  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:10.046171  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0819 18:15:10.046190  112560 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-896148-m02 && echo "ha-896148-m02" | sudo tee /etc/hostname
	I0819 18:15:10.170561  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896148-m02
	
	I0819 18:15:10.170652  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m02
	I0819 18:15:10.187623  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:10.187793  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0819 18:15:10.187810  112560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-896148-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-896148-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-896148-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:15:10.305001  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:15:10.305027  112560 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19468-24160/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-24160/.minikube}
	I0819 18:15:10.305041  112560 ubuntu.go:177] setting up certificates
	I0819 18:15:10.305051  112560 provision.go:84] configureAuth start
	I0819 18:15:10.305106  112560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148-m02
	I0819 18:15:10.322047  112560 provision.go:143] copyHostCerts
	I0819 18:15:10.322082  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem
	I0819 18:15:10.322107  112560 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem, removing ...
	I0819 18:15:10.322115  112560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem
	I0819 18:15:10.322180  112560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem (1078 bytes)
	I0819 18:15:10.322258  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem
	I0819 18:15:10.322277  112560 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem, removing ...
	I0819 18:15:10.322281  112560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem
	I0819 18:15:10.322306  112560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem (1123 bytes)
	I0819 18:15:10.322363  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem
	I0819 18:15:10.322380  112560 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem, removing ...
	I0819 18:15:10.322384  112560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem
	I0819 18:15:10.322416  112560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem (1679 bytes)
	I0819 18:15:10.322473  112560 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem org=jenkins.ha-896148-m02 san=[127.0.0.1 192.168.49.3 ha-896148-m02 localhost minikube]
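
The server cert generated here is signed by the machine CA and carries the SAN list from the log line (node IP, hostname, localhost, and so on). A self-contained sketch of issuing such a certificate with crypto/x509 follows; it is illustrative only, and the key size, lifetime, and self-signed stand-in CA are assumptions rather than minikube's values.

// Issue a server certificate whose SANs cover the node's IPs and hostnames,
// signed by a CA. The CA here is created in-memory so the sketch runs on its own.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA: self-signed for the sketch; minikube would load ca.pem/ca-key.pem
	// from the machine store instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with a SAN list like the one in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-896148-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-896148-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}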
	I0819 18:15:10.540358  112560 provision.go:177] copyRemoteCerts
	I0819 18:15:10.540408  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:15:10.540438  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m02
	I0819 18:15:10.556896  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m02/id_rsa Username:docker}
	I0819 18:15:10.645092  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:15:10.645180  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 18:15:10.665374  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:15:10.665441  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:15:10.685468  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:15:10.685516  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:15:10.705386  112560 provision.go:87] duration metric: took 400.3226ms to configureAuth
	I0819 18:15:10.705411  112560 ubuntu.go:193] setting minikube options for container-runtime
	I0819 18:15:10.705627  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:10.705735  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m02
	I0819 18:15:10.722065  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:15:10.722220  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0819 18:15:10.722236  112560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:15:11.036835  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:15:11.036857  112560 machine.go:96] duration metric: took 4.145928761s to provisionDockerMachine
	I0819 18:15:11.036868  112560 start.go:293] postStartSetup for "ha-896148-m02" (driver="docker")
	I0819 18:15:11.036878  112560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:15:11.036937  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:15:11.036974  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m02
	I0819 18:15:11.053075  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m02/id_rsa Username:docker}
	I0819 18:15:11.141394  112560 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:15:11.144240  112560 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 18:15:11.144269  112560 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 18:15:11.144277  112560 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 18:15:11.144284  112560 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 18:15:11.144293  112560 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-24160/.minikube/addons for local assets ...
	I0819 18:15:11.144337  112560 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-24160/.minikube/files for local assets ...
	I0819 18:15:11.144400  112560 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem -> 309662.pem in /etc/ssl/certs
	I0819 18:15:11.144409  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem -> /etc/ssl/certs/309662.pem
	I0819 18:15:11.144481  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:15:11.151672  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem --> /etc/ssl/certs/309662.pem (1708 bytes)
	I0819 18:15:11.171670  112560 start.go:296] duration metric: took 134.790244ms for postStartSetup
	I0819 18:15:11.171737  112560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:15:11.171782  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m02
	I0819 18:15:11.189298  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m02/id_rsa Username:docker}
	I0819 18:15:11.273692  112560 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 18:15:11.277692  112560 fix.go:56] duration metric: took 4.692238524s for fixHost
	I0819 18:15:11.277716  112560 start.go:83] releasing machines lock for "ha-896148-m02", held for 4.692286789s
	I0819 18:15:11.277771  112560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148-m02
	I0819 18:15:11.295840  112560 out.go:177] * Found network options:
	I0819 18:15:11.297084  112560 out.go:177]   - NO_PROXY=192.168.49.2
	W0819 18:15:11.298216  112560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 18:15:11.298245  112560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 18:15:11.298311  112560 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:15:11.298344  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m02
	I0819 18:15:11.298399  112560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:15:11.298477  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m02
	I0819 18:15:11.315719  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m02/id_rsa Username:docker}
	I0819 18:15:11.316983  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m02/id_rsa Username:docker}
	I0819 18:15:11.534940  112560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 18:15:11.539649  112560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:15:11.564468  112560 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 18:15:11.564520  112560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:15:11.581402  112560 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 18:15:11.581425  112560 start.go:495] detecting cgroup driver to use...
	I0819 18:15:11.581454  112560 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 18:15:11.581500  112560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:15:11.666626  112560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:15:11.679018  112560 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:15:11.679079  112560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:15:11.761199  112560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:15:11.778507  112560 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:15:12.166044  112560 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:15:12.410983  112560 docker.go:233] disabling docker service ...
	I0819 18:15:12.411046  112560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:15:12.474364  112560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:15:12.494288  112560 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:15:12.800566  112560 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:15:13.102433  112560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:15:13.166979  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:15:13.187334  112560 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:15:13.187387  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:13.197959  112560 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:15:13.198017  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:13.260524  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:13.270440  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:13.279692  112560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:15:13.288192  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:13.299897  112560 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:15:13.367838  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
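
The two shell edits above are an idempotent pair: first make sure 02-crio.conf has a default_sysctls block, then make sure the block contains net.ipv4.ip_unprivileged_port_start=0. A rough in-memory equivalent (illustrative only, not minikube's code):

// Ensure a crio.conf fragment has a default_sysctls block containing the
// unprivileged-port sysctl; applying the function twice changes nothing.
package main

import (
	"fmt"
	"strings"
)

const sysctlEntry = `  "net.ipv4.ip_unprivileged_port_start=0",`

func ensureUnprivilegedPorts(conf string) string {
	if !strings.Contains(conf, "default_sysctls") {
		conf += "\ndefault_sysctls = [\n]\n"
	}
	if !strings.Contains(conf, "ip_unprivileged_port_start") {
		conf = strings.Replace(conf, "default_sysctls = [", "default_sysctls = [\n"+sysctlEntry, 1)
	}
	return conf
}

func main() {
	conf := `cgroup_manager = "cgroupfs"` + "\n" + `conmon_cgroup = "pod"` + "\n"
	fmt.Print(ensureUnprivilegedPorts(ensureUnprivilegedPorts(conf)))
}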
	I0819 18:15:13.385965  112560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:15:13.458139  112560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:15:13.469412  112560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:15:13.709096  112560 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:15:15.068101  112560 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.358966971s)
	I0819 18:15:15.068123  112560 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:15:15.068160  112560 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:15:15.071654  112560 start.go:563] Will wait 60s for crictl version
	I0819 18:15:15.071707  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:15:15.074706  112560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:15:15.113670  112560 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 18:15:15.113755  112560 ssh_runner.go:195] Run: crio --version
	I0819 18:15:15.147643  112560 ssh_runner.go:195] Run: crio --version
	I0819 18:15:15.186328  112560 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 18:15:15.187863  112560 out.go:177]   - env NO_PROXY=192.168.49.2
	I0819 18:15:15.189230  112560 cli_runner.go:164] Run: docker network inspect ha-896148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 18:15:15.205254  112560 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 18:15:15.208502  112560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:15:15.218202  112560 mustload.go:65] Loading cluster: ha-896148
	I0819 18:15:15.218404  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:15.218628  112560 cli_runner.go:164] Run: docker container inspect ha-896148 --format={{.State.Status}}
	I0819 18:15:15.234300  112560 host.go:66] Checking if "ha-896148" exists ...
	I0819 18:15:15.234519  112560 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148 for IP: 192.168.49.3
	I0819 18:15:15.234541  112560 certs.go:194] generating shared ca certs ...
	I0819 18:15:15.234553  112560 certs.go:226] acquiring lock for ca certs: {Name:mk29d2f357e66b5ff77917021423cbbf2fc2a40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:15:15.234654  112560 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key
	I0819 18:15:15.234692  112560 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key
	I0819 18:15:15.234702  112560 certs.go:256] generating profile certs ...
	I0819 18:15:15.234798  112560 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/client.key
	I0819 18:15:15.234861  112560 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.key.5da3189a
	I0819 18:15:15.234895  112560 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/proxy-client.key
	I0819 18:15:15.234905  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:15:15.234916  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:15:15.234928  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:15:15.234938  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:15:15.234951  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0819 18:15:15.234964  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0819 18:15:15.234976  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0819 18:15:15.234988  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0819 18:15:15.235032  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966.pem (1338 bytes)
	W0819 18:15:15.235062  112560 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966_empty.pem, impossibly tiny 0 bytes
	I0819 18:15:15.235073  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 18:15:15.235093  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:15:15.235115  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:15:15.235135  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem (1679 bytes)
	I0819 18:15:15.235173  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem (1708 bytes)
	I0819 18:15:15.235198  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem -> /usr/share/ca-certificates/309662.pem
	I0819 18:15:15.235213  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:15:15.235225  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966.pem -> /usr/share/ca-certificates/30966.pem
	I0819 18:15:15.235265  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:15:15.250583  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148/id_rsa Username:docker}
	I0819 18:15:15.329362  112560 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0819 18:15:15.332701  112560 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0819 18:15:15.343903  112560 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0819 18:15:15.347147  112560 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0819 18:15:15.358290  112560 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0819 18:15:15.361289  112560 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0819 18:15:15.372416  112560 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0819 18:15:15.375443  112560 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0819 18:15:15.385964  112560 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0819 18:15:15.388737  112560 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0819 18:15:15.399464  112560 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0819 18:15:15.402299  112560 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0819 18:15:15.412680  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:15:15.433396  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:15:15.453800  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:15:15.474393  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:15:15.495648  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 18:15:15.516141  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 18:15:15.536740  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 18:15:15.558210  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 18:15:15.583811  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem --> /usr/share/ca-certificates/309662.pem (1708 bytes)
	I0819 18:15:15.607063  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:15:15.630027  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966.pem --> /usr/share/ca-certificates/30966.pem (1338 bytes)
	I0819 18:15:15.677165  112560 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0819 18:15:15.692648  112560 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0819 18:15:15.708106  112560 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0819 18:15:15.722830  112560 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0819 18:15:15.738184  112560 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0819 18:15:15.753533  112560 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0819 18:15:15.768470  112560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0819 18:15:15.783487  112560 ssh_runner.go:195] Run: openssl version
	I0819 18:15:15.788015  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/309662.pem && ln -fs /usr/share/ca-certificates/309662.pem /etc/ssl/certs/309662.pem"
	I0819 18:15:15.795972  112560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309662.pem
	I0819 18:15:15.799126  112560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:05 /usr/share/ca-certificates/309662.pem
	I0819 18:15:15.799166  112560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309662.pem
	I0819 18:15:15.805074  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/309662.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 18:15:15.812593  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:15:15.820475  112560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:15:15.823358  112560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:15:15.823403  112560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:15:15.829262  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:15:15.836346  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30966.pem && ln -fs /usr/share/ca-certificates/30966.pem /etc/ssl/certs/30966.pem"
	I0819 18:15:15.844207  112560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/30966.pem
	I0819 18:15:15.846974  112560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:05 /usr/share/ca-certificates/30966.pem
	I0819 18:15:15.847016  112560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30966.pem
	I0819 18:15:15.852798  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30966.pem /etc/ssl/certs/51391683.0"
	I0819 18:15:15.860386  112560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:15:15.863367  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 18:15:15.869490  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 18:15:15.875627  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 18:15:15.881853  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 18:15:15.887555  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 18:15:15.893658  112560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0819 18:15:15.899842  112560 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.0 crio true true} ...
	I0819 18:15:15.899948  112560 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-896148-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-896148 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:15:15.899978  112560 kube-vip.go:115] generating kube-vip config ...
	I0819 18:15:15.900006  112560 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0819 18:15:15.910443  112560 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0819 18:15:15.910502  112560 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0819 18:15:15.910545  112560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:15:15.917818  112560 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:15:15.917858  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0819 18:15:15.925245  112560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 18:15:15.940542  112560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:15:15.955974  112560 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0819 18:15:15.971055  112560 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:15:15.973956  112560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:15:15.982907  112560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:15:16.073244  112560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:15:16.083959  112560 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0819 18:15:16.084277  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:15:16.086155  112560 out.go:177] * Verifying Kubernetes components...
	I0819 18:15:16.087725  112560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:15:16.180292  112560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:15:16.191024  112560 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 18:15:16.191225  112560 kapi.go:59] client config for ha-896148: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/client.crt", KeyFile:"/home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/client.key", CAFile:"/home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 18:15:16.191281  112560 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0819 18:15:16.191459  112560 node_ready.go:35] waiting up to 6m0s for node "ha-896148-m02" to be "Ready" ...
	I0819 18:15:16.191534  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:16.191548  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:16.191558  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:16.191564  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:27.906685  112560 round_trippers.go:574] Response Status: 500 Internal Server Error in 11715 milliseconds
	I0819 18:15:27.907584  112560 node_ready.go:53] error getting node "ha-896148-m02": etcdserver: request timed out
	I0819 18:15:27.907671  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:27.907683  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:27.907694  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:27.907701  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.278130  112560 round_trippers.go:574] Response Status: 500 Internal Server Error in 4370 milliseconds
	I0819 18:15:32.278428  112560 node_ready.go:53] error getting node "ha-896148-m02": etcdserver: leader changed
	I0819 18:15:32.278509  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:32.278548  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.278569  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.278593  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.360165  112560 round_trippers.go:574] Response Status: 200 OK in 81 milliseconds
	I0819 18:15:32.361204  112560 node_ready.go:49] node "ha-896148-m02" has status "Ready":"True"
	I0819 18:15:32.361226  112560 node_ready.go:38] duration metric: took 16.169753084s for node "ha-896148-m02" to be "Ready" ...
	I0819 18:15:32.361238  112560 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:15:32.361287  112560 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0819 18:15:32.361302  112560 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0819 18:15:32.361373  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0819 18:15:32.361380  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.361390  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.361395  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.370695  112560 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0819 18:15:32.379039  112560 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:32.379134  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:15:32.379146  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.379156  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.379160  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.381031  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:15:32.381569  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:32.381585  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.381592  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.381595  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.383159  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:15:32.383502  112560 pod_ready.go:93] pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:32.383517  112560 pod_ready.go:82] duration metric: took 4.454966ms for pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:32.383525  112560 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zbfmw" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:32.383563  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zbfmw
	I0819 18:15:32.383570  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.383577  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.383581  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.385105  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:15:32.385605  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:32.385619  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.385624  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.385629  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.387363  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:15:32.387756  112560 pod_ready.go:93] pod "coredns-6f6b679f8f-zbfmw" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:32.387779  112560 pod_ready.go:82] duration metric: took 4.242453ms for pod "coredns-6f6b679f8f-zbfmw" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:32.387791  112560 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:32.387839  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896148
	I0819 18:15:32.387848  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.387858  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.387864  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.390460  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:32.390862  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:32.390875  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.390881  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.390886  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.392540  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:15:32.393049  112560 pod_ready.go:93] pod "etcd-ha-896148" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:32.393069  112560 pod_ready.go:82] duration metric: took 5.270809ms for pod "etcd-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:32.393080  112560 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:32.393165  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896148-m02
	I0819 18:15:32.393177  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.393187  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.393193  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.394891  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:15:32.395332  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:32.395346  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.395355  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.395360  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.396966  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:15:32.397354  112560 pod_ready.go:93] pod "etcd-ha-896148-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:32.397370  112560 pod_ready.go:82] duration metric: took 4.284061ms for pod "etcd-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:32.397379  112560 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896148-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:32.478608  112560 request.go:632] Waited for 81.16002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896148-m03
	I0819 18:15:32.478701  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896148-m03
	I0819 18:15:32.478715  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.478724  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.478730  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.480861  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:32.678871  112560 request.go:632] Waited for 197.356378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m03
	I0819 18:15:32.678969  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m03
	I0819 18:15:32.678980  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.678991  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.679002  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.681418  112560 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 18:15:32.681540  112560 pod_ready.go:98] node "ha-896148-m03" hosting pod "etcd-ha-896148-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-896148-m03": nodes "ha-896148-m03" not found
	I0819 18:15:32.681556  112560 pod_ready.go:82] duration metric: took 284.170753ms for pod "etcd-ha-896148-m03" in "kube-system" namespace to be "Ready" ...
	E0819 18:15:32.681576  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148-m03" hosting pod "etcd-ha-896148-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-896148-m03": nodes "ha-896148-m03" not found
	I0819 18:15:32.681606  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:32.879020  112560 request.go:632] Waited for 197.346126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896148
	I0819 18:15:32.879088  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896148
	I0819 18:15:32.879096  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:32.879108  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:32.879116  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:32.881531  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:33.079167  112560 request.go:632] Waited for 197.004939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:33.079224  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:33.079231  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:33.079242  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:33.079258  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:33.081794  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:33.082373  112560 pod_ready.go:93] pod "kube-apiserver-ha-896148" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:33.082397  112560 pod_ready.go:82] duration metric: took 400.77652ms for pod "kube-apiserver-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:33.082413  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:33.279326  112560 request.go:632] Waited for 196.842363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896148-m02
	I0819 18:15:33.279393  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896148-m02
	I0819 18:15:33.279405  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:33.279415  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:33.279425  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:33.282081  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:33.479208  112560 request.go:632] Waited for 196.369737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:33.479278  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:33.479285  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:33.479299  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:33.479317  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:33.481266  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:15:33.481859  112560 pod_ready.go:93] pod "kube-apiserver-ha-896148-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:33.481883  112560 pod_ready.go:82] duration metric: took 399.461897ms for pod "kube-apiserver-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:33.481897  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896148-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:33.678805  112560 request.go:632] Waited for 196.821134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896148-m03
	I0819 18:15:33.678868  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896148-m03
	I0819 18:15:33.678875  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:33.678887  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:33.678893  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:33.682044  112560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:15:33.879113  112560 request.go:632] Waited for 196.405869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m03
	I0819 18:15:33.879232  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m03
	I0819 18:15:33.879268  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:33.879296  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:33.879314  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:33.883108  112560 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0819 18:15:33.883278  112560 pod_ready.go:98] node "ha-896148-m03" hosting pod "kube-apiserver-ha-896148-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-896148-m03": nodes "ha-896148-m03" not found
	I0819 18:15:33.883305  112560 pod_ready.go:82] duration metric: took 401.399432ms for pod "kube-apiserver-ha-896148-m03" in "kube-system" namespace to be "Ready" ...
	E0819 18:15:33.883326  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148-m03" hosting pod "kube-apiserver-ha-896148-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-896148-m03": nodes "ha-896148-m03" not found
	I0819 18:15:33.883342  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:34.079616  112560 request.go:632] Waited for 196.185954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896148
	I0819 18:15:34.079682  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896148
	I0819 18:15:34.079688  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:34.079695  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:34.079699  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:34.082017  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:34.279061  112560 request.go:632] Waited for 196.34108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:34.279117  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:34.279122  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:34.279130  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:34.279138  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:34.281018  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:15:34.281480  112560 pod_ready.go:93] pod "kube-controller-manager-ha-896148" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:34.281500  112560 pod_ready.go:82] duration metric: took 398.144508ms for pod "kube-controller-manager-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:34.281509  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:34.479483  112560 request.go:632] Waited for 197.901634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896148-m02
	I0819 18:15:34.479540  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896148-m02
	I0819 18:15:34.479546  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:34.479553  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:34.479558  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:34.482148  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:34.679090  112560 request.go:632] Waited for 196.349903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:34.679144  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:34.679149  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:34.679168  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:34.679174  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:34.681774  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:34.682270  112560 pod_ready.go:93] pod "kube-controller-manager-ha-896148-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:34.682289  112560 pod_ready.go:82] duration metric: took 400.774092ms for pod "kube-controller-manager-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:34.682299  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896148-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:34.879333  112560 request.go:632] Waited for 196.975754ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896148-m03
	I0819 18:15:34.879404  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896148-m03
	I0819 18:15:34.879412  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:34.879420  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:34.879424  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:34.881743  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:35.078609  112560 request.go:632] Waited for 196.294855ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m03
	I0819 18:15:35.078674  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m03
	I0819 18:15:35.078682  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:35.078692  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:35.078701  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:35.081061  112560 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 18:15:35.081215  112560 pod_ready.go:98] node "ha-896148-m03" hosting pod "kube-controller-manager-ha-896148-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-896148-m03": nodes "ha-896148-m03" not found
	I0819 18:15:35.081233  112560 pod_ready.go:82] duration metric: took 398.925272ms for pod "kube-controller-manager-ha-896148-m03" in "kube-system" namespace to be "Ready" ...
	E0819 18:15:35.081248  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148-m03" hosting pod "kube-controller-manager-ha-896148-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-896148-m03": nodes "ha-896148-m03" not found
	I0819 18:15:35.081261  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xdhg" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:35.279569  112560 request.go:632] Waited for 198.225946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdhg
	I0819 18:15:35.279670  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdhg
	I0819 18:15:35.279681  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:35.279692  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:35.279700  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:35.282359  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:35.479466  112560 request.go:632] Waited for 196.34931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:15:35.479522  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:15:35.479529  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:35.479539  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:35.479545  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:35.482093  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:35.482568  112560 pod_ready.go:93] pod "kube-proxy-8xdhg" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:35.482587  112560 pod_ready.go:82] duration metric: took 401.312902ms for pod "kube-proxy-8xdhg" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:35.482596  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g56n" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:35.678531  112560 request.go:632] Waited for 195.878016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g56n
	I0819 18:15:35.678602  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g56n
	I0819 18:15:35.678614  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:35.678625  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:35.678644  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:35.681342  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:35.879320  112560 request.go:632] Waited for 197.360229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:35.879388  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:35.879396  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:35.879406  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:35.879416  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:35.881850  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:35.882302  112560 pod_ready.go:93] pod "kube-proxy-9g56n" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:35.882322  112560 pod_ready.go:82] duration metric: took 399.71906ms for pod "kube-proxy-9g56n" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:35.882335  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fnq4h" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:36.079437  112560 request.go:632] Waited for 197.040976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnq4h
	I0819 18:15:36.079486  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnq4h
	I0819 18:15:36.079493  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:36.079503  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:36.079512  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:36.082079  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:36.278924  112560 request.go:632] Waited for 196.338549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:36.279001  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:36.279012  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:36.279023  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:36.279031  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:36.281354  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:36.281949  112560 pod_ready.go:93] pod "kube-proxy-fnq4h" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:36.281969  112560 pod_ready.go:82] duration metric: took 399.626045ms for pod "kube-proxy-fnq4h" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:36.281982  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mhx8s" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:36.478919  112560 request.go:632] Waited for 196.858457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhx8s
	I0819 18:15:36.478969  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mhx8s
	I0819 18:15:36.478974  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:36.478982  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:36.478991  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:36.481545  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:36.679152  112560 request.go:632] Waited for 196.946452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m03
	I0819 18:15:36.679221  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m03
	I0819 18:15:36.679231  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:36.679243  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:36.679251  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:36.681771  112560 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 18:15:36.681926  112560 pod_ready.go:98] node "ha-896148-m03" hosting pod "kube-proxy-mhx8s" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-896148-m03": nodes "ha-896148-m03" not found
	I0819 18:15:36.681948  112560 pod_ready.go:82] duration metric: took 399.954106ms for pod "kube-proxy-mhx8s" in "kube-system" namespace to be "Ready" ...
	E0819 18:15:36.681961  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148-m03" hosting pod "kube-proxy-mhx8s" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-896148-m03": nodes "ha-896148-m03" not found
	I0819 18:15:36.681970  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:36.879111  112560 request.go:632] Waited for 197.060556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896148
	I0819 18:15:36.879200  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896148
	I0819 18:15:36.879211  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:36.879224  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:36.879237  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:36.881645  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:37.079467  112560 request.go:632] Waited for 197.349095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:37.079518  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:15:37.079524  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:37.079531  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:37.079536  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:37.082052  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:37.082471  112560 pod_ready.go:93] pod "kube-scheduler-ha-896148" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:37.082490  112560 pod_ready.go:82] duration metric: took 400.505158ms for pod "kube-scheduler-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:37.082499  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:37.279545  112560 request.go:632] Waited for 196.965103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896148-m02
	I0819 18:15:37.279600  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896148-m02
	I0819 18:15:37.279606  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:37.279614  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:37.279620  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:37.282034  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:37.478851  112560 request.go:632] Waited for 196.354701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:37.478907  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:15:37.478912  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:37.478930  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:37.478935  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:37.481624  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:37.482080  112560 pod_ready.go:93] pod "kube-scheduler-ha-896148-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:15:37.482099  112560 pod_ready.go:82] duration metric: took 399.593906ms for pod "kube-scheduler-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:37.482108  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896148-m03" in "kube-system" namespace to be "Ready" ...
	I0819 18:15:37.679206  112560 request.go:632] Waited for 197.032751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896148-m03
	I0819 18:15:37.679272  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896148-m03
	I0819 18:15:37.679279  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:37.679287  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:37.679295  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:37.681867  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:15:37.878623  112560 request.go:632] Waited for 196.27118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m03
	I0819 18:15:37.878702  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m03
	I0819 18:15:37.878711  112560 round_trippers.go:469] Request Headers:
	I0819 18:15:37.878719  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:15:37.878723  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:15:37.880986  112560 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0819 18:15:37.881116  112560 pod_ready.go:98] node "ha-896148-m03" hosting pod "kube-scheduler-ha-896148-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-896148-m03": nodes "ha-896148-m03" not found
	I0819 18:15:37.881152  112560 pod_ready.go:82] duration metric: took 399.036004ms for pod "kube-scheduler-ha-896148-m03" in "kube-system" namespace to be "Ready" ...
	E0819 18:15:37.881168  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148-m03" hosting pod "kube-scheduler-ha-896148-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-896148-m03": nodes "ha-896148-m03" not found
	I0819 18:15:37.881182  112560 pod_ready.go:39] duration metric: took 5.519931602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:15:37.881204  112560 api_server.go:52] waiting for apiserver process to appear ...
	I0819 18:15:37.881272  112560 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:15:37.892254  112560 api_server.go:72] duration metric: took 21.808257219s to wait for apiserver process to appear ...
	I0819 18:15:37.892275  112560 api_server.go:88] waiting for apiserver healthz status ...
	I0819 18:15:37.892297  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:37.895724  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:37.895745  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:38.393344  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:38.396746  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:38.396767  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:38.893382  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:38.896911  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:38.896942  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:39.392483  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:39.396010  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:39.396035  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:39.892924  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:39.896362  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:39.896389  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:40.393021  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:40.396562  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:40.396584  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:40.893180  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:40.898430  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:40.898453  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:41.393172  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:41.396878  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:41.396919  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:41.892440  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:41.896085  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:41.896110  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:42.392645  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:42.396303  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:42.396369  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:42.892845  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:42.896550  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:42.896581  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:43.393208  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:43.396674  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:43.396700  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:43.893287  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:43.896906  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:43.896939  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:44.392481  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:44.395986  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:44.396026  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:44.892499  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:44.896023  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:44.896046  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:45.392564  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:45.396027  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:45.396064  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:45.892577  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:45.896460  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:45.896500  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:46.393063  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:46.396547  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:46.396581  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:46.892399  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:46.895885  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:46.895918  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:47.392491  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:47.395908  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:47.395933  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:47.892444  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:47.895970  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:47.895994  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:48.392497  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:48.395862  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:48.395885  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:48.893079  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:48.896524  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:48.896552  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:49.393142  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:49.396401  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:49.396425  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:49.893341  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:49.896935  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:49.896957  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:50.393043  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:50.459869  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:50.459974  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:50.892402  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:50.896141  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:50.896176  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:51.392797  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:51.396399  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:51.396423  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:51.893029  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:51.896478  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:51.896509  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:52.393108  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:52.396740  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:52.396765  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:52.893364  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:52.896989  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:52.897013  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:53.392507  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:53.396000  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:53.396020  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:53.892567  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:53.895899  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:53.895927  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:54.392428  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:54.396529  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:54.396551  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:54.893292  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:54.896677  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:54.896699  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:55.393299  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:55.397803  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:55.397837  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:55.892342  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:55.895868  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:55.895892  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:56.392418  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:56.395878  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:56.395907  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:56.892567  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:56.920610  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:56.920637  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:57.393202  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:57.396668  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:57.396690  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:57.893298  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:57.896886  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:57.896909  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:58.392418  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:58.395967  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:58.395988  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:58.892614  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:58.896681  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:58.896705  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:59.393329  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:59.396917  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:59.396945  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:15:59.892918  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:15:59.896805  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:15:59.896838  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:00.392451  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:00.395874  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:00.395897  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:00.893356  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:00.897800  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:00.897825  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:01.392351  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:01.396439  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:01.396465  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:01.893042  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:01.896644  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:01.896665  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:02.393333  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:02.396858  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:02.396881  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:02.892438  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:02.895873  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:02.895893  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:03.392421  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:03.395992  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:03.396016  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:03.892544  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:03.895962  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:03.895982  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:04.392499  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:04.398374  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:04.398402  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:04.893055  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:04.896796  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:04.896819  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:05.393389  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:05.396868  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:05.396893  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:05.892435  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:05.895863  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:05.895885  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:06.392517  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:06.395995  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:06.396022  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:06.892969  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:06.896394  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:06.896417  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:07.392997  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:07.396531  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:07.396557  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:07.892588  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:07.896077  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:07.896099  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:08.392641  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:08.395964  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:08.395986  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:08.892515  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:08.896097  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:08.896123  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:09.392633  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:09.396172  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:09.396196  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:09.893036  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:09.896569  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:09.896597  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:10.393186  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:10.397720  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:10.397742  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:10.892399  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:10.895834  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:10.895859  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:11.392361  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:11.395956  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:11.395983  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:11.892484  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:11.895915  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:11.895935  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:12.392424  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:12.395841  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:12.395861  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:12.892405  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:12.896674  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:12.896703  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:13.393273  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:13.396740  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:13.396768  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:13.893348  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:13.896748  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:13.896780  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:14.393370  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:14.396865  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:14.396889  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:14.892484  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:14.896727  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0819 18:16:14.896749  112560 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0819 18:16:15.392771  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:15.393161  112560 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0819 18:16:15.892357  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:15.892819  112560 api_server.go:269] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0819 18:16:16.393381  112560 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:16.393464  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:16.430831  112560 cri.go:89] found id: "c4a9f0acd07772f4d498feca8caade1e1a611d9be072fa63ec8cafb2e6809bb8"
	I0819 18:16:16.430858  112560 cri.go:89] found id: "9158977e6a9bd3f47a2d8e8538610b60f2184f9b20974a08fbb46eea52eedf72"
	I0819 18:16:16.430862  112560 cri.go:89] found id: ""
	I0819 18:16:16.430870  112560 logs.go:276] 2 containers: [c4a9f0acd07772f4d498feca8caade1e1a611d9be072fa63ec8cafb2e6809bb8 9158977e6a9bd3f47a2d8e8538610b60f2184f9b20974a08fbb46eea52eedf72]
	I0819 18:16:16.430923  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:16.434299  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:16.437599  112560 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:16.437654  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:16.475862  112560 cri.go:89] found id: "3c143370256de8a1d53f2e6afef0ce3e1e6429497749e1ebebee3e8506ae8dd7"
	I0819 18:16:16.475886  112560 cri.go:89] found id: "18cc54b0e1a426dc0b8ef4dd418339860db3811a6e0c08c92d29ea50ce6c5383"
	I0819 18:16:16.475892  112560 cri.go:89] found id: ""
	I0819 18:16:16.475901  112560 logs.go:276] 2 containers: [3c143370256de8a1d53f2e6afef0ce3e1e6429497749e1ebebee3e8506ae8dd7 18cc54b0e1a426dc0b8ef4dd418339860db3811a6e0c08c92d29ea50ce6c5383]
	I0819 18:16:16.475953  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:16.479452  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:16.483046  112560 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:16.483108  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:16.520897  112560 cri.go:89] found id: ""
	I0819 18:16:16.520926  112560 logs.go:276] 0 containers: []
	W0819 18:16:16.520937  112560 logs.go:278] No container was found matching "coredns"
	I0819 18:16:16.520954  112560 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:16.521015  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:16.560260  112560 cri.go:89] found id: "932d9ebe69672b8237e4146e8e7345499b1e31bb16e18ee86316b2dac45a911d"
	I0819 18:16:16.560281  112560 cri.go:89] found id: "7c02550a585e717ae833f710625eda651f6040aa850052d7d73eb685eee76aeb"
	I0819 18:16:16.560291  112560 cri.go:89] found id: ""
	I0819 18:16:16.560298  112560 logs.go:276] 2 containers: [932d9ebe69672b8237e4146e8e7345499b1e31bb16e18ee86316b2dac45a911d 7c02550a585e717ae833f710625eda651f6040aa850052d7d73eb685eee76aeb]
	I0819 18:16:16.560349  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:16.563741  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:16.567166  112560 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:16.567225  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:16.602564  112560 cri.go:89] found id: "a75d014a2c0608057cddd03933eab794ed9b7b7463b59e8596ddc4b9bfe4aad9"
	I0819 18:16:16.602590  112560 cri.go:89] found id: ""
	I0819 18:16:16.602600  112560 logs.go:276] 1 containers: [a75d014a2c0608057cddd03933eab794ed9b7b7463b59e8596ddc4b9bfe4aad9]
	I0819 18:16:16.602648  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:16.605972  112560 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:16.606027  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:16.642230  112560 cri.go:89] found id: "30929b599a5851af206badb318dfb8a1c571a1424056c3e59ed4af9b2275d546"
	I0819 18:16:16.642251  112560 cri.go:89] found id: "402a7b6f7d89af6d1f4e34237702ae871154bb586628514bcd3ddadced8b1085"
	I0819 18:16:16.642257  112560 cri.go:89] found id: ""
	I0819 18:16:16.642265  112560 logs.go:276] 2 containers: [30929b599a5851af206badb318dfb8a1c571a1424056c3e59ed4af9b2275d546 402a7b6f7d89af6d1f4e34237702ae871154bb586628514bcd3ddadced8b1085]
	I0819 18:16:16.642315  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:16.646068  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:16.649015  112560 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:16.649073  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:16.684237  112560 cri.go:89] found id: "7bbbd2a81577032892a0faa9fa0f03a44a620f293ff5b19d2c4accbb6cf89348"
	I0819 18:16:16.684255  112560 cri.go:89] found id: ""
	I0819 18:16:16.684262  112560 logs.go:276] 1 containers: [7bbbd2a81577032892a0faa9fa0f03a44a620f293ff5b19d2c4accbb6cf89348]
	I0819 18:16:16.684300  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:16.688523  112560 logs.go:123] Gathering logs for kube-apiserver [c4a9f0acd07772f4d498feca8caade1e1a611d9be072fa63ec8cafb2e6809bb8] ...
	I0819 18:16:16.688554  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a9f0acd07772f4d498feca8caade1e1a611d9be072fa63ec8cafb2e6809bb8"
	I0819 18:16:16.734008  112560 logs.go:123] Gathering logs for kube-scheduler [7c02550a585e717ae833f710625eda651f6040aa850052d7d73eb685eee76aeb] ...
	I0819 18:16:16.734040  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c02550a585e717ae833f710625eda651f6040aa850052d7d73eb685eee76aeb"
	I0819 18:16:16.771216  112560 logs.go:123] Gathering logs for kube-controller-manager [402a7b6f7d89af6d1f4e34237702ae871154bb586628514bcd3ddadced8b1085] ...
	I0819 18:16:16.771241  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 402a7b6f7d89af6d1f4e34237702ae871154bb586628514bcd3ddadced8b1085"
	I0819 18:16:16.807237  112560 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:16.807263  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:16.825275  112560 logs.go:123] Gathering logs for kube-apiserver [9158977e6a9bd3f47a2d8e8538610b60f2184f9b20974a08fbb46eea52eedf72] ...
	I0819 18:16:16.825309  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9158977e6a9bd3f47a2d8e8538610b60f2184f9b20974a08fbb46eea52eedf72"
	I0819 18:16:16.861976  112560 logs.go:123] Gathering logs for kube-scheduler [932d9ebe69672b8237e4146e8e7345499b1e31bb16e18ee86316b2dac45a911d] ...
	I0819 18:16:16.862002  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 932d9ebe69672b8237e4146e8e7345499b1e31bb16e18ee86316b2dac45a911d"
	I0819 18:16:16.914361  112560 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:16.914398  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:16.998383  112560 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:16.998430  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:17.229170  112560 logs.go:123] Gathering logs for kube-proxy [a75d014a2c0608057cddd03933eab794ed9b7b7463b59e8596ddc4b9bfe4aad9] ...
	I0819 18:16:17.229199  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a75d014a2c0608057cddd03933eab794ed9b7b7463b59e8596ddc4b9bfe4aad9"
	I0819 18:16:17.277326  112560 logs.go:123] Gathering logs for kube-controller-manager [30929b599a5851af206badb318dfb8a1c571a1424056c3e59ed4af9b2275d546] ...
	I0819 18:16:17.277363  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30929b599a5851af206badb318dfb8a1c571a1424056c3e59ed4af9b2275d546"
	I0819 18:16:17.326885  112560 logs.go:123] Gathering logs for kindnet [7bbbd2a81577032892a0faa9fa0f03a44a620f293ff5b19d2c4accbb6cf89348] ...
	I0819 18:16:17.326913  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7bbbd2a81577032892a0faa9fa0f03a44a620f293ff5b19d2c4accbb6cf89348"
	I0819 18:16:17.364535  112560 logs.go:123] Gathering logs for etcd [3c143370256de8a1d53f2e6afef0ce3e1e6429497749e1ebebee3e8506ae8dd7] ...
	I0819 18:16:17.364566  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c143370256de8a1d53f2e6afef0ce3e1e6429497749e1ebebee3e8506ae8dd7"
	I0819 18:16:17.418186  112560 logs.go:123] Gathering logs for etcd [18cc54b0e1a426dc0b8ef4dd418339860db3811a6e0c08c92d29ea50ce6c5383] ...
	I0819 18:16:17.418215  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18cc54b0e1a426dc0b8ef4dd418339860db3811a6e0c08c92d29ea50ce6c5383"
	I0819 18:16:17.461942  112560 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:17.461973  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:17.539961  112560 logs.go:123] Gathering logs for container status ...
	I0819 18:16:17.539993  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:20.096506  112560 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 18:16:20.101337  112560 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 18:16:20.101402  112560 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0819 18:16:20.101410  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:20.101418  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:20.101426  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:20.106799  112560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 18:16:20.106934  112560 api_server.go:141] control plane version: v1.31.0
	I0819 18:16:20.106954  112560 api_server.go:131] duration metric: took 42.214673115s to wait for apiserver health ...
	I0819 18:16:20.106961  112560 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 18:16:20.106988  112560 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0819 18:16:20.107037  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 18:16:20.138087  112560 cri.go:89] found id: "c4a9f0acd07772f4d498feca8caade1e1a611d9be072fa63ec8cafb2e6809bb8"
	I0819 18:16:20.138104  112560 cri.go:89] found id: "9158977e6a9bd3f47a2d8e8538610b60f2184f9b20974a08fbb46eea52eedf72"
	I0819 18:16:20.138108  112560 cri.go:89] found id: ""
	I0819 18:16:20.138114  112560 logs.go:276] 2 containers: [c4a9f0acd07772f4d498feca8caade1e1a611d9be072fa63ec8cafb2e6809bb8 9158977e6a9bd3f47a2d8e8538610b60f2184f9b20974a08fbb46eea52eedf72]
	I0819 18:16:20.138155  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:20.141225  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:20.144003  112560 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0819 18:16:20.144051  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 18:16:20.174395  112560 cri.go:89] found id: "3c143370256de8a1d53f2e6afef0ce3e1e6429497749e1ebebee3e8506ae8dd7"
	I0819 18:16:20.174411  112560 cri.go:89] found id: "18cc54b0e1a426dc0b8ef4dd418339860db3811a6e0c08c92d29ea50ce6c5383"
	I0819 18:16:20.174415  112560 cri.go:89] found id: ""
	I0819 18:16:20.174421  112560 logs.go:276] 2 containers: [3c143370256de8a1d53f2e6afef0ce3e1e6429497749e1ebebee3e8506ae8dd7 18cc54b0e1a426dc0b8ef4dd418339860db3811a6e0c08c92d29ea50ce6c5383]
	I0819 18:16:20.174465  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:20.177497  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:20.180276  112560 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0819 18:16:20.180335  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 18:16:20.210756  112560 cri.go:89] found id: ""
	I0819 18:16:20.210779  112560 logs.go:276] 0 containers: []
	W0819 18:16:20.210787  112560 logs.go:278] No container was found matching "coredns"
	I0819 18:16:20.210793  112560 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0819 18:16:20.210886  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 18:16:20.242207  112560 cri.go:89] found id: "932d9ebe69672b8237e4146e8e7345499b1e31bb16e18ee86316b2dac45a911d"
	I0819 18:16:20.242230  112560 cri.go:89] found id: "7c02550a585e717ae833f710625eda651f6040aa850052d7d73eb685eee76aeb"
	I0819 18:16:20.242234  112560 cri.go:89] found id: ""
	I0819 18:16:20.242241  112560 logs.go:276] 2 containers: [932d9ebe69672b8237e4146e8e7345499b1e31bb16e18ee86316b2dac45a911d 7c02550a585e717ae833f710625eda651f6040aa850052d7d73eb685eee76aeb]
	I0819 18:16:20.242291  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:20.245539  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:20.248428  112560 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0819 18:16:20.248484  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 18:16:20.278993  112560 cri.go:89] found id: "a75d014a2c0608057cddd03933eab794ed9b7b7463b59e8596ddc4b9bfe4aad9"
	I0819 18:16:20.279018  112560 cri.go:89] found id: ""
	I0819 18:16:20.279026  112560 logs.go:276] 1 containers: [a75d014a2c0608057cddd03933eab794ed9b7b7463b59e8596ddc4b9bfe4aad9]
	I0819 18:16:20.279066  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:20.282361  112560 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 18:16:20.282426  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 18:16:20.313902  112560 cri.go:89] found id: "30929b599a5851af206badb318dfb8a1c571a1424056c3e59ed4af9b2275d546"
	I0819 18:16:20.313921  112560 cri.go:89] found id: "402a7b6f7d89af6d1f4e34237702ae871154bb586628514bcd3ddadced8b1085"
	I0819 18:16:20.313925  112560 cri.go:89] found id: ""
	I0819 18:16:20.313932  112560 logs.go:276] 2 containers: [30929b599a5851af206badb318dfb8a1c571a1424056c3e59ed4af9b2275d546 402a7b6f7d89af6d1f4e34237702ae871154bb586628514bcd3ddadced8b1085]
	I0819 18:16:20.313979  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:20.317092  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:20.319981  112560 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0819 18:16:20.320040  112560 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 18:16:20.351777  112560 cri.go:89] found id: "7bbbd2a81577032892a0faa9fa0f03a44a620f293ff5b19d2c4accbb6cf89348"
	I0819 18:16:20.351799  112560 cri.go:89] found id: ""
	I0819 18:16:20.351812  112560 logs.go:276] 1 containers: [7bbbd2a81577032892a0faa9fa0f03a44a620f293ff5b19d2c4accbb6cf89348]
	I0819 18:16:20.351867  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:20.355159  112560 logs.go:123] Gathering logs for container status ...
	I0819 18:16:20.355180  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 18:16:20.392533  112560 logs.go:123] Gathering logs for kube-apiserver [c4a9f0acd07772f4d498feca8caade1e1a611d9be072fa63ec8cafb2e6809bb8] ...
	I0819 18:16:20.392560  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a9f0acd07772f4d498feca8caade1e1a611d9be072fa63ec8cafb2e6809bb8"
	I0819 18:16:20.429906  112560 logs.go:123] Gathering logs for kube-scheduler [932d9ebe69672b8237e4146e8e7345499b1e31bb16e18ee86316b2dac45a911d] ...
	I0819 18:16:20.429934  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 932d9ebe69672b8237e4146e8e7345499b1e31bb16e18ee86316b2dac45a911d"
	I0819 18:16:20.475273  112560 logs.go:123] Gathering logs for kube-scheduler [7c02550a585e717ae833f710625eda651f6040aa850052d7d73eb685eee76aeb] ...
	I0819 18:16:20.475304  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c02550a585e717ae833f710625eda651f6040aa850052d7d73eb685eee76aeb"
	I0819 18:16:20.506595  112560 logs.go:123] Gathering logs for kube-proxy [a75d014a2c0608057cddd03933eab794ed9b7b7463b59e8596ddc4b9bfe4aad9] ...
	I0819 18:16:20.506620  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a75d014a2c0608057cddd03933eab794ed9b7b7463b59e8596ddc4b9bfe4aad9"
	I0819 18:16:20.539273  112560 logs.go:123] Gathering logs for etcd [18cc54b0e1a426dc0b8ef4dd418339860db3811a6e0c08c92d29ea50ce6c5383] ...
	I0819 18:16:20.539296  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18cc54b0e1a426dc0b8ef4dd418339860db3811a6e0c08c92d29ea50ce6c5383"
	I0819 18:16:20.584244  112560 logs.go:123] Gathering logs for kube-controller-manager [30929b599a5851af206badb318dfb8a1c571a1424056c3e59ed4af9b2275d546] ...
	I0819 18:16:20.584279  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30929b599a5851af206badb318dfb8a1c571a1424056c3e59ed4af9b2275d546"
	I0819 18:16:20.631702  112560 logs.go:123] Gathering logs for dmesg ...
	I0819 18:16:20.631729  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 18:16:20.647145  112560 logs.go:123] Gathering logs for describe nodes ...
	I0819 18:16:20.647169  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 18:16:20.814880  112560 logs.go:123] Gathering logs for CRI-O ...
	I0819 18:16:20.814914  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0819 18:16:20.876103  112560 logs.go:123] Gathering logs for kindnet [7bbbd2a81577032892a0faa9fa0f03a44a620f293ff5b19d2c4accbb6cf89348] ...
	I0819 18:16:20.876138  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7bbbd2a81577032892a0faa9fa0f03a44a620f293ff5b19d2c4accbb6cf89348"
	I0819 18:16:20.911694  112560 logs.go:123] Gathering logs for kubelet ...
	I0819 18:16:20.911722  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 18:16:20.977863  112560 logs.go:123] Gathering logs for kube-apiserver [9158977e6a9bd3f47a2d8e8538610b60f2184f9b20974a08fbb46eea52eedf72] ...
	I0819 18:16:20.977895  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9158977e6a9bd3f47a2d8e8538610b60f2184f9b20974a08fbb46eea52eedf72"
	I0819 18:16:21.011874  112560 logs.go:123] Gathering logs for etcd [3c143370256de8a1d53f2e6afef0ce3e1e6429497749e1ebebee3e8506ae8dd7] ...
	I0819 18:16:21.011900  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c143370256de8a1d53f2e6afef0ce3e1e6429497749e1ebebee3e8506ae8dd7"
	I0819 18:16:21.057113  112560 logs.go:123] Gathering logs for kube-controller-manager [402a7b6f7d89af6d1f4e34237702ae871154bb586628514bcd3ddadced8b1085] ...
	I0819 18:16:21.057155  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 402a7b6f7d89af6d1f4e34237702ae871154bb586628514bcd3ddadced8b1085"
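The block above is minikube collecting per-container logs before it continues the restart. A minimal sketch of gathering the same material by hand on the node (container IDs are the ones printed above; crictl and journalctl are assumed to be present, as they are in this run):

	# list all containers, running or exited, to find their IDs
	sudo crictl ps -a
	# tail the last 400 lines of one container, e.g. the kube-apiserver instance above
	sudo crictl logs --tail 400 c4a9f0acd07772f4d498feca8caade1e1a611d9be072fa63ec8cafb2e6809bb8
	# runtime and kubelet logs come from journald, kernel warnings from dmesg
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400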
	I0819 18:16:23.593300  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0819 18:16:23.593319  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:23.593327  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:23.593331  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:23.603696  112560 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0819 18:16:23.611146  112560 system_pods.go:59] 19 kube-system pods found
	I0819 18:16:23.611181  112560 system_pods.go:61] "coredns-6f6b679f8f-htfhr" [edde4f34-292b-450b-a8ce-064c03cae547] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 18:16:23.611192  112560 system_pods.go:61] "coredns-6f6b679f8f-zbfmw" [e12786b7-a449-4bf2-8a7f-375d4f32f125] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 18:16:23.611198  112560 system_pods.go:61] "etcd-ha-896148" [7aa2bcd5-4f7b-4bd4-9af4-9d02408bdb5c] Running
	I0819 18:16:23.611204  112560 system_pods.go:61] "etcd-ha-896148-m02" [2fc21854-216c-43ce-a746-533c4360c699] Running
	I0819 18:16:23.611209  112560 system_pods.go:61] "kindnet-55rn2" [9ccd5ca7-fddf-4d52-b522-3d4f2d67bd2b] Running
	I0819 18:16:23.611214  112560 system_pods.go:61] "kindnet-ct9nq" [4cefec6e-186c-47b3-a99f-70be9b52c03e] Running
	I0819 18:16:23.611219  112560 system_pods.go:61] "kindnet-l5v7t" [d2f70430-6af3-4acb-aad1-84ff046396ba] Running
	I0819 18:16:23.611227  112560 system_pods.go:61] "kube-apiserver-ha-896148" [4bf16020-bfbc-4822-8197-5bee6a0c841d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 18:16:23.611236  112560 system_pods.go:61] "kube-apiserver-ha-896148-m02" [bab0f050-331f-4639-82bf-12af666ab0ed] Running
	I0819 18:16:23.611246  112560 system_pods.go:61] "kube-controller-manager-ha-896148" [42243f4e-cce2-473e-af5a-01f2ce0f6a99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 18:16:23.611255  112560 system_pods.go:61] "kube-controller-manager-ha-896148-m02" [b39bce2e-c062-4aad-bfbb-804864ceee47] Running
	I0819 18:16:23.611261  112560 system_pods.go:61] "kube-proxy-8xdhg" [63480f07-fafc-4047-a938-45b020f4e4f4] Running
	I0819 18:16:23.611267  112560 system_pods.go:61] "kube-proxy-9g56n" [533b5b9a-eed2-479b-9a3a-8d0235563193] Running
	I0819 18:16:23.611272  112560 system_pods.go:61] "kube-proxy-fnq4h" [a8fde579-d036-47e8-867a-7b432208f105] Running
	I0819 18:16:23.611278  112560 system_pods.go:61] "kube-scheduler-ha-896148" [3fc54d47-9cfc-4460-947f-da8ba1a35bc6] Running
	I0819 18:16:23.611287  112560 system_pods.go:61] "kube-scheduler-ha-896148-m02" [e925906a-30f9-417d-8ed7-098d74776558] Running
	I0819 18:16:23.611293  112560 system_pods.go:61] "kube-vip-ha-896148" [e9b0ccd4-5e69-4aeb-b41b-bf6a4700bea3] Running
	I0819 18:16:23.611300  112560 system_pods.go:61] "kube-vip-ha-896148-m02" [af441d09-a7dd-47a6-91cc-78bc70bcf0ec] Running
	I0819 18:16:23.611307  112560 system_pods.go:61] "storage-provisioner" [6c1efb8c-65ae-4a7f-969c-6ae89ec02d92] Running
	I0819 18:16:23.611317  112560 system_pods.go:74] duration metric: took 3.504346416s to wait for pod list to return data ...
	I0819 18:16:23.611329  112560 default_sa.go:34] waiting for default service account to be created ...
	I0819 18:16:23.611399  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0819 18:16:23.611408  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:23.611418  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:23.611425  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:23.613964  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:23.614186  112560 default_sa.go:45] found service account: "default"
	I0819 18:16:23.614203  112560 default_sa.go:55] duration metric: took 2.864776ms for default service account to be created ...
	I0819 18:16:23.614212  112560 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 18:16:23.614275  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0819 18:16:23.614284  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:23.614296  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:23.614302  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:23.617670  112560 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0819 18:16:23.622907  112560 system_pods.go:86] 19 kube-system pods found
	I0819 18:16:23.622934  112560 system_pods.go:89] "coredns-6f6b679f8f-htfhr" [edde4f34-292b-450b-a8ce-064c03cae547] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 18:16:23.622945  112560 system_pods.go:89] "coredns-6f6b679f8f-zbfmw" [e12786b7-a449-4bf2-8a7f-375d4f32f125] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0819 18:16:23.622952  112560 system_pods.go:89] "etcd-ha-896148" [7aa2bcd5-4f7b-4bd4-9af4-9d02408bdb5c] Running
	I0819 18:16:23.622958  112560 system_pods.go:89] "etcd-ha-896148-m02" [2fc21854-216c-43ce-a746-533c4360c699] Running
	I0819 18:16:23.622964  112560 system_pods.go:89] "kindnet-55rn2" [9ccd5ca7-fddf-4d52-b522-3d4f2d67bd2b] Running
	I0819 18:16:23.622970  112560 system_pods.go:89] "kindnet-ct9nq" [4cefec6e-186c-47b3-a99f-70be9b52c03e] Running
	I0819 18:16:23.622976  112560 system_pods.go:89] "kindnet-l5v7t" [d2f70430-6af3-4acb-aad1-84ff046396ba] Running
	I0819 18:16:23.622988  112560 system_pods.go:89] "kube-apiserver-ha-896148" [4bf16020-bfbc-4822-8197-5bee6a0c841d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0819 18:16:23.622999  112560 system_pods.go:89] "kube-apiserver-ha-896148-m02" [bab0f050-331f-4639-82bf-12af666ab0ed] Running
	I0819 18:16:23.623011  112560 system_pods.go:89] "kube-controller-manager-ha-896148" [42243f4e-cce2-473e-af5a-01f2ce0f6a99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0819 18:16:23.623020  112560 system_pods.go:89] "kube-controller-manager-ha-896148-m02" [b39bce2e-c062-4aad-bfbb-804864ceee47] Running
	I0819 18:16:23.623026  112560 system_pods.go:89] "kube-proxy-8xdhg" [63480f07-fafc-4047-a938-45b020f4e4f4] Running
	I0819 18:16:23.623032  112560 system_pods.go:89] "kube-proxy-9g56n" [533b5b9a-eed2-479b-9a3a-8d0235563193] Running
	I0819 18:16:23.623038  112560 system_pods.go:89] "kube-proxy-fnq4h" [a8fde579-d036-47e8-867a-7b432208f105] Running
	I0819 18:16:23.623044  112560 system_pods.go:89] "kube-scheduler-ha-896148" [3fc54d47-9cfc-4460-947f-da8ba1a35bc6] Running
	I0819 18:16:23.623050  112560 system_pods.go:89] "kube-scheduler-ha-896148-m02" [e925906a-30f9-417d-8ed7-098d74776558] Running
	I0819 18:16:23.623058  112560 system_pods.go:89] "kube-vip-ha-896148" [e9b0ccd4-5e69-4aeb-b41b-bf6a4700bea3] Running
	I0819 18:16:23.623066  112560 system_pods.go:89] "kube-vip-ha-896148-m02" [af441d09-a7dd-47a6-91cc-78bc70bcf0ec] Running
	I0819 18:16:23.623071  112560 system_pods.go:89] "storage-provisioner" [6c1efb8c-65ae-4a7f-969c-6ae89ec02d92] Running
	I0819 18:16:23.623082  112560 system_pods.go:126] duration metric: took 8.863177ms to wait for k8s-apps to be running ...
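The pod listing above comes from minikube's own API polling. Roughly the same readiness view can be had from the host with kubectl, assuming the kubeconfig context carries the profile name as elsewhere in this report (a sketch, not part of the captured run):

	# show the kube-system pods the wait loop above is inspecting
	kubectl --context ha-896148 -n kube-system get pods -o wide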
	I0819 18:16:23.623094  112560 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:16:23.623141  112560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:16:23.635879  112560 system_svc.go:56] duration metric: took 12.778258ms WaitForService to wait for kubelet
	I0819 18:16:23.635912  112560 kubeadm.go:582] duration metric: took 1m7.551917154s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:16:23.635935  112560 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:16:23.636023  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0819 18:16:23.636034  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:23.636044  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:23.636050  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:23.641279  112560 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0819 18:16:23.642391  112560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 18:16:23.642413  112560 node_conditions.go:123] node cpu capacity is 8
	I0819 18:16:23.642427  112560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 18:16:23.642430  112560 node_conditions.go:123] node cpu capacity is 8
	I0819 18:16:23.642434  112560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 18:16:23.642437  112560 node_conditions.go:123] node cpu capacity is 8
	I0819 18:16:23.642441  112560 node_conditions.go:105] duration metric: took 6.494285ms to run NodePressure ...
	I0819 18:16:23.642451  112560 start.go:241] waiting for startup goroutines ...
	I0819 18:16:23.642473  112560 start.go:255] writing updated cluster config ...
	I0819 18:16:23.644485  112560 out.go:201] 
	I0819 18:16:23.645899  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:16:23.645981  112560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/config.json ...
	I0819 18:16:23.647565  112560 out.go:177] * Starting "ha-896148-m04" worker node in "ha-896148" cluster
	I0819 18:16:23.649051  112560 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 18:16:23.650245  112560 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 18:16:23.651305  112560 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0819 18:16:23.651323  112560 cache.go:56] Caching tarball of preloaded images
	I0819 18:16:23.651390  112560 preload.go:172] Found /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0819 18:16:23.651403  112560 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0819 18:16:23.651391  112560 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 18:16:23.651483  112560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/config.json ...
	W0819 18:16:23.670319  112560 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 18:16:23.670336  112560 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 18:16:23.670407  112560 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 18:16:23.670426  112560 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 18:16:23.670432  112560 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 18:16:23.670443  112560 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 18:16:23.670450  112560 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 18:16:23.671370  112560 image.go:273] response: 
	I0819 18:16:23.719777  112560 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 18:16:23.719811  112560 cache.go:194] Successfully downloaded all kic artifacts
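The earlier warning that the locally present kicbase image "is of wrong architecture" is why minikube falls back to the cached tarball here. A hedged way to check that mismatch yourself, assuming the image tag without its digest is enough to resolve it in the local daemon:

	# compare the image's recorded architecture with the host's
	docker image inspect --format '{{.Architecture}}' gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452
	uname -m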
	I0819 18:16:23.719843  112560 start.go:360] acquireMachinesLock for ha-896148-m04: {Name:mkfc16c079479595f6a0504e344bf85c4d9a9cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 18:16:23.719897  112560 start.go:364] duration metric: took 37.494µs to acquireMachinesLock for "ha-896148-m04"
	I0819 18:16:23.719915  112560 start.go:96] Skipping create...Using existing machine configuration
	I0819 18:16:23.719920  112560 fix.go:54] fixHost starting: m04
	I0819 18:16:23.720116  112560 cli_runner.go:164] Run: docker container inspect ha-896148-m04 --format={{.State.Status}}
	I0819 18:16:23.735783  112560 fix.go:112] recreateIfNeeded on ha-896148-m04: state=Stopped err=<nil>
	W0819 18:16:23.735812  112560 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 18:16:23.737368  112560 out.go:177] * Restarting existing docker container for "ha-896148-m04" ...
	I0819 18:16:23.738461  112560 cli_runner.go:164] Run: docker start ha-896148-m04
	I0819 18:16:23.982074  112560 cli_runner.go:164] Run: docker container inspect ha-896148-m04 --format={{.State.Status}}
	I0819 18:16:23.999463  112560 kic.go:430] container "ha-896148-m04" state is running.
	I0819 18:16:23.999798  112560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148-m04
	I0819 18:16:24.016477  112560 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/config.json ...
	I0819 18:16:24.016711  112560 machine.go:93] provisionDockerMachine start ...
	I0819 18:16:24.016778  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m04
	I0819 18:16:24.033089  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:16:24.033285  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0819 18:16:24.033298  112560 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 18:16:24.033864  112560 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48176->127.0.0.1:32838: read: connection reset by peer
	I0819 18:16:27.156254  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896148-m04
	
	I0819 18:16:27.156280  112560 ubuntu.go:169] provisioning hostname "ha-896148-m04"
	I0819 18:16:27.156342  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m04
	I0819 18:16:27.172711  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:16:27.172921  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0819 18:16:27.172942  112560 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-896148-m04 && echo "ha-896148-m04" | sudo tee /etc/hostname
	I0819 18:16:27.303311  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-896148-m04
	
	I0819 18:16:27.303383  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m04
	I0819 18:16:27.319801  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:16:27.320018  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0819 18:16:27.320044  112560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-896148-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-896148-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-896148-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 18:16:27.436805  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 18:16:27.436832  112560 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19468-24160/.minikube CaCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19468-24160/.minikube}
	I0819 18:16:27.436854  112560 ubuntu.go:177] setting up certificates
	I0819 18:16:27.436866  112560 provision.go:84] configureAuth start
	I0819 18:16:27.436920  112560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148-m04
	I0819 18:16:27.454066  112560 provision.go:143] copyHostCerts
	I0819 18:16:27.454102  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem
	I0819 18:16:27.454132  112560 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem, removing ...
	I0819 18:16:27.454140  112560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem
	I0819 18:16:27.454208  112560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/ca.pem (1078 bytes)
	I0819 18:16:27.454293  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem
	I0819 18:16:27.454313  112560 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem, removing ...
	I0819 18:16:27.454319  112560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem
	I0819 18:16:27.454355  112560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/cert.pem (1123 bytes)
	I0819 18:16:27.454411  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem
	I0819 18:16:27.454434  112560 exec_runner.go:144] found /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem, removing ...
	I0819 18:16:27.454451  112560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem
	I0819 18:16:27.454488  112560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19468-24160/.minikube/key.pem (1679 bytes)
	I0819 18:16:27.454560  112560 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem org=jenkins.ha-896148-m04 san=[127.0.0.1 192.168.49.5 ha-896148-m04 localhost minikube]
	I0819 18:16:27.541023  112560 provision.go:177] copyRemoteCerts
	I0819 18:16:27.541079  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 18:16:27.541137  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m04
	I0819 18:16:27.558965  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m04/id_rsa Username:docker}
	I0819 18:16:27.645785  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0819 18:16:27.645854  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 18:16:27.666772  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0819 18:16:27.666838  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 18:16:27.687624  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0819 18:16:27.687669  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 18:16:27.708787  112560 provision.go:87] duration metric: took 271.909517ms to configureAuth
	I0819 18:16:27.708810  112560 ubuntu.go:193] setting minikube options for container-runtime
	I0819 18:16:27.709026  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:16:27.709114  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m04
	I0819 18:16:27.726685  112560 main.go:141] libmachine: Using SSH client type: native
	I0819 18:16:27.726899  112560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0819 18:16:27.726923  112560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0819 18:16:27.947296  112560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0819 18:16:27.947323  112560 machine.go:96] duration metric: took 3.93059755s to provisionDockerMachine
	I0819 18:16:27.947337  112560 start.go:293] postStartSetup for "ha-896148-m04" (driver="docker")
	I0819 18:16:27.947347  112560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 18:16:27.947411  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 18:16:27.947452  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m04
	I0819 18:16:27.964731  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m04/id_rsa Username:docker}
	I0819 18:16:28.053404  112560 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 18:16:28.056183  112560 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 18:16:28.056210  112560 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 18:16:28.056218  112560 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 18:16:28.056224  112560 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 18:16:28.056233  112560 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-24160/.minikube/addons for local assets ...
	I0819 18:16:28.056274  112560 filesync.go:126] Scanning /home/jenkins/minikube-integration/19468-24160/.minikube/files for local assets ...
	I0819 18:16:28.056339  112560 filesync.go:149] local asset: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem -> 309662.pem in /etc/ssl/certs
	I0819 18:16:28.056349  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem -> /etc/ssl/certs/309662.pem
	I0819 18:16:28.056427  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 18:16:28.064083  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem --> /etc/ssl/certs/309662.pem (1708 bytes)
	I0819 18:16:28.085937  112560 start.go:296] duration metric: took 138.586812ms for postStartSetup
	I0819 18:16:28.086022  112560 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:16:28.086078  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m04
	I0819 18:16:28.102930  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m04/id_rsa Username:docker}
	I0819 18:16:28.185623  112560 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 18:16:28.189674  112560 fix.go:56] duration metric: took 4.469749459s for fixHost
	I0819 18:16:28.189696  112560 start.go:83] releasing machines lock for "ha-896148-m04", held for 4.469788082s
	I0819 18:16:28.189759  112560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148-m04
	I0819 18:16:28.208112  112560 out.go:177] * Found network options:
	I0819 18:16:28.209344  112560 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0819 18:16:28.210444  112560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 18:16:28.210464  112560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 18:16:28.210483  112560 proxy.go:119] fail to check proxy env: Error ip not in block
	W0819 18:16:28.210493  112560 proxy.go:119] fail to check proxy env: Error ip not in block
	I0819 18:16:28.210553  112560 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0819 18:16:28.210589  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m04
	I0819 18:16:28.210767  112560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 18:16:28.210902  112560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m04
	I0819 18:16:28.228233  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m04/id_rsa Username:docker}
	I0819 18:16:28.229248  112560 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m04/id_rsa Username:docker}
	I0819 18:16:28.449878  112560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 18:16:28.454031  112560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:16:28.461966  112560 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0819 18:16:28.462022  112560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 18:16:28.469911  112560 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
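Here minikube parks any loopback/bridge/podman CNI configs by renaming them with a .mk_disabled suffix so that only the cluster's CNI (kindnet in this run) is picked up. A short sketch for inspecting the result on the node:

	# see which CNI configs are active and which have been parked with a .mk_disabled suffix
	ls -la /etc/cni/net.d/
	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled'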
	I0819 18:16:28.469930  112560 start.go:495] detecting cgroup driver to use...
	I0819 18:16:28.469965  112560 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 18:16:28.470010  112560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0819 18:16:28.480678  112560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0819 18:16:28.490604  112560 docker.go:217] disabling cri-docker service (if available) ...
	I0819 18:16:28.490650  112560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 18:16:28.501750  112560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 18:16:28.511619  112560 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 18:16:28.587502  112560 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 18:16:28.658068  112560 docker.go:233] disabling docker service ...
	I0819 18:16:28.658124  112560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 18:16:28.668984  112560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 18:16:28.678960  112560 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 18:16:28.751797  112560 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 18:16:28.834925  112560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 18:16:28.846018  112560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 18:16:28.860693  112560 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0819 18:16:28.860752  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:16:28.870162  112560 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0819 18:16:28.870223  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:16:28.880036  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:16:28.889634  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:16:28.899569  112560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 18:16:28.907863  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:16:28.916820  112560 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:16:28.925255  112560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0819 18:16:28.934033  112560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 18:16:28.942256  112560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 18:16:28.951340  112560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:16:29.028861  112560 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0819 18:16:29.163408  112560 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0819 18:16:29.163473  112560 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0819 18:16:29.166789  112560 start.go:563] Will wait 60s for crictl version
	I0819 18:16:29.166840  112560 ssh_runner.go:195] Run: which crictl
	I0819 18:16:29.169772  112560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 18:16:29.200974  112560 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0819 18:16:29.201059  112560 ssh_runner.go:195] Run: crio --version
	I0819 18:16:29.235669  112560 ssh_runner.go:195] Run: crio --version
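The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. Condensed into one hand-runnable sequence (same drop-in path and values as in the log; a sketch, not a substitute for what minikube does):

	# point CRI-O at the pause image and cgroup driver minikube expects, then restart it
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio
	# confirm the runtime answers over the CRI socket
	sudo crictl version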
	I0819 18:16:29.270381  112560 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.24.6 ...
	I0819 18:16:29.271800  112560 out.go:177]   - env NO_PROXY=192.168.49.2
	I0819 18:16:29.273224  112560 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0819 18:16:29.274562  112560 cli_runner.go:164] Run: docker network inspect ha-896148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 18:16:29.292442  112560 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 18:16:29.295833  112560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:16:29.305966  112560 mustload.go:65] Loading cluster: ha-896148
	I0819 18:16:29.306238  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:16:29.306502  112560 cli_runner.go:164] Run: docker container inspect ha-896148 --format={{.State.Status}}
	I0819 18:16:29.323027  112560 host.go:66] Checking if "ha-896148" exists ...
	I0819 18:16:29.323327  112560 certs.go:68] Setting up /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148 for IP: 192.168.49.5
	I0819 18:16:29.323341  112560 certs.go:194] generating shared ca certs ...
	I0819 18:16:29.323355  112560 certs.go:226] acquiring lock for ca certs: {Name:mk29d2f357e66b5ff77917021423cbbf2fc2a40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 18:16:29.323484  112560 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key
	I0819 18:16:29.323522  112560 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key
	I0819 18:16:29.323534  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0819 18:16:29.323547  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0819 18:16:29.323559  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0819 18:16:29.323574  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0819 18:16:29.323619  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966.pem (1338 bytes)
	W0819 18:16:29.323648  112560 certs.go:480] ignoring /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966_empty.pem, impossibly tiny 0 bytes
	I0819 18:16:29.323657  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 18:16:29.323679  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/ca.pem (1078 bytes)
	I0819 18:16:29.323701  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/cert.pem (1123 bytes)
	I0819 18:16:29.323723  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/key.pem (1679 bytes)
	I0819 18:16:29.323759  112560 certs.go:484] found cert: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem (1708 bytes)
	I0819 18:16:29.323793  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966.pem -> /usr/share/ca-certificates/30966.pem
	I0819 18:16:29.323806  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem -> /usr/share/ca-certificates/309662.pem
	I0819 18:16:29.323822  112560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:16:29.323844  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 18:16:29.345897  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 18:16:29.366932  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 18:16:29.388641  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0819 18:16:29.410439  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/certs/30966.pem --> /usr/share/ca-certificates/30966.pem (1338 bytes)
	I0819 18:16:29.432922  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/ssl/certs/309662.pem --> /usr/share/ca-certificates/309662.pem (1708 bytes)
	I0819 18:16:29.455444  112560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 18:16:29.477564  112560 ssh_runner.go:195] Run: openssl version
	I0819 18:16:29.482706  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 18:16:29.491273  112560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:16:29.494619  112560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:16:29.494668  112560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 18:16:29.500881  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 18:16:29.509119  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/30966.pem && ln -fs /usr/share/ca-certificates/30966.pem /etc/ssl/certs/30966.pem"
	I0819 18:16:29.517771  112560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/30966.pem
	I0819 18:16:29.520930  112560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 18:05 /usr/share/ca-certificates/30966.pem
	I0819 18:16:29.520979  112560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/30966.pem
	I0819 18:16:29.527172  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/30966.pem /etc/ssl/certs/51391683.0"
	I0819 18:16:29.535091  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/309662.pem && ln -fs /usr/share/ca-certificates/309662.pem /etc/ssl/certs/309662.pem"
	I0819 18:16:29.543639  112560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/309662.pem
	I0819 18:16:29.546675  112560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 18:05 /usr/share/ca-certificates/309662.pem
	I0819 18:16:29.546731  112560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/309662.pem
	I0819 18:16:29.552678  112560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/309662.pem /etc/ssl/certs/3ec20f2e.0"
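The openssl/ln steps above install each CA under /etc/ssl/certs using its OpenSSL subject hash as the link name (the b5213941.0 link for the minikube CA, for example). A minimal sketch of how that hash-named symlink is derived:

	# OpenSSL looks CAs up by subject hash, so each certificate gets a <hash>.0 symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash, b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0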
	I0819 18:16:29.560649  112560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 18:16:29.563732  112560 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 18:16:29.563779  112560 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.0  false true} ...
	I0819 18:16:29.563885  112560 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-896148-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-896148 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 18:16:29.563947  112560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 18:16:29.571925  112560 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 18:16:29.571990  112560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0819 18:16:29.579897  112560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0819 18:16:29.596356  112560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 18:16:29.612167  112560 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0819 18:16:29.615096  112560 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 18:16:29.624225  112560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:16:29.698851  112560 ssh_runner.go:195] Run: sudo systemctl start kubelet
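After the 10-kubeadm.conf drop-in shown above is written, the kubelet is reloaded and started. A hedged way to confirm the drop-in took effect on the node (commands beyond those in the log, for illustration only):

	# show the unit together with minikube's drop-in, then check it is actually running
	systemctl cat kubelet
	systemctl is-active kubelet
	sudo journalctl -u kubelet -n 50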
	I0819 18:16:29.708925  112560 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0819 18:16:29.709176  112560 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:16:29.711220  112560 out.go:177] * Verifying Kubernetes components...
	I0819 18:16:29.712532  112560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 18:16:29.795097  112560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 18:16:29.807955  112560 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 18:16:29.808277  112560 kapi.go:59] client config for ha-896148: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/client.crt", KeyFile:"/home/jenkins/minikube-integration/19468-24160/.minikube/profiles/ha-896148/client.key", CAFile:"/home/jenkins/minikube-integration/19468-24160/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f18d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0819 18:16:29.808356  112560 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0819 18:16:29.808624  112560 node_ready.go:35] waiting up to 6m0s for node "ha-896148-m04" to be "Ready" ...
	I0819 18:16:29.808704  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:29.808711  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:29.808722  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:29.808728  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:29.810734  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:30.309613  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:30.309632  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:30.309640  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:30.309642  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:30.311879  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:30.809797  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:30.809816  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:30.809822  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:30.809827  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:30.812011  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:31.308822  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:31.308850  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:31.308855  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:31.308860  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:31.311088  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:31.808904  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:31.808921  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:31.808929  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:31.808933  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:31.811250  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:31.811684  112560 node_ready.go:53] node "ha-896148-m04" has status "Ready":"Unknown"
	I0819 18:16:32.308976  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:32.308995  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:32.309003  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:32.309006  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:32.311431  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:32.809371  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:32.809390  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:32.809401  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:32.809405  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:32.811917  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:33.308799  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:33.308818  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:33.308826  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:33.308831  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:33.311218  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:33.808832  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:33.808851  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:33.808859  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:33.808863  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:33.811168  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:33.811801  112560 node_ready.go:53] node "ha-896148-m04" has status "Ready":"Unknown"
	I0819 18:16:34.309020  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:34.309041  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:34.309049  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:34.309054  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:34.311332  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:34.809260  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:34.809285  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:34.809297  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:34.809302  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:34.811737  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:35.309541  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:35.309566  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:35.309577  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:35.309585  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:35.312134  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:35.808891  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:35.808909  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:35.808915  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:35.808919  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:35.811342  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:35.811902  112560 node_ready.go:53] node "ha-896148-m04" has status "Ready":"Unknown"
	I0819 18:16:36.309106  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:36.309139  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:36.309149  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:36.309157  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:36.311356  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:36.809279  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:36.809299  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:36.809309  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:36.809314  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:36.811394  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:37.309212  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:37.309236  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:37.309247  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:37.309253  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:37.311680  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:37.312236  112560 node_ready.go:49] node "ha-896148-m04" has status "Ready":"True"
	I0819 18:16:37.312257  112560 node_ready.go:38] duration metric: took 7.50361793s for node "ha-896148-m04" to be "Ready" ...
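The repeated GETs above are minikube polling /api/v1/nodes/ha-896148-m04 until its Ready condition flips from "Unknown" to "True". Roughly the same check from the host, assuming the profile-named kubeconfig context used elsewhere in this report (a sketch, not part of the captured run):

	# read the Ready condition the poll above is watching, or block until it is True
	kubectl --context ha-896148 get node ha-896148-m04 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	kubectl --context ha-896148 wait --for=condition=Ready node/ha-896148-m04 --timeout=6m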
	I0819 18:16:37.312268  112560 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:16:37.312338  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0819 18:16:37.312350  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:37.312360  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:37.312368  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:37.316903  112560 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0819 18:16:37.324023  112560 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:37.324119  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:37.324131  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:37.324139  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:37.324150  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:37.326111  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:37.326615  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:37.326631  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:37.326639  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:37.326642  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:37.328412  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:37.825209  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:37.825228  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:37.825236  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:37.825239  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:37.827802  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:37.828437  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:37.828453  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:37.828460  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:37.828464  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:37.830529  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:38.324249  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:38.324269  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:38.324276  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:38.324281  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:38.326502  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:38.327095  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:38.327108  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:38.327115  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:38.327120  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:38.328941  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:38.824756  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:38.824774  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:38.824782  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:38.824785  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:38.827017  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:38.827702  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:38.827719  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:38.827726  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:38.827731  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:38.829546  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:39.324305  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:39.324326  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:39.324334  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:39.324339  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:39.326981  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:39.327618  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:39.327637  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:39.327644  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:39.327648  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:39.329962  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:39.330383  112560 pod_ready.go:103] pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:39.824831  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:39.824849  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:39.824859  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:39.824862  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:39.827076  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:39.827788  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:39.827809  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:39.827820  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:39.827827  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:39.829867  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:40.324628  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:40.324648  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:40.324656  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:40.324659  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:40.326934  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:40.327482  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:40.327497  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:40.327506  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:40.327511  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:40.329425  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:40.824190  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:40.824211  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:40.824220  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:40.824230  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:40.827014  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:40.827581  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:40.827598  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:40.827608  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:40.827614  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:40.829739  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:41.324568  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:41.324589  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:41.324597  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:41.324601  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:41.326609  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:41.327192  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:41.327207  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:41.327213  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:41.327216  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:41.329025  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:41.824901  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:41.824922  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:41.824932  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:41.824937  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:41.827580  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:41.828220  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:41.828234  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:41.828240  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:41.828245  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:41.830263  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:41.830690  112560 pod_ready.go:103] pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:42.325097  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:42.325115  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:42.325152  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:42.325160  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:42.327715  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:42.328359  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:42.328375  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:42.328382  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:42.328388  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:42.330359  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:42.825213  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:42.825233  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:42.825240  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:42.825247  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:42.827816  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:42.828415  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:42.828432  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:42.828442  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:42.828447  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:42.830672  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:43.324410  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:43.324435  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:43.324446  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:43.324453  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:43.326942  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:43.327584  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:43.327604  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:43.327615  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:43.327625  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:43.329546  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:43.824254  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:43.824274  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:43.824282  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:43.824289  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:43.827174  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:43.827773  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:43.827790  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:43.827797  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:43.827801  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:43.829730  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:44.324525  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:44.324544  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:44.324551  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:44.324555  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:44.327190  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:44.327767  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:44.327783  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:44.327794  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:44.327798  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:44.330113  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:44.330537  112560 pod_ready.go:103] pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:44.825005  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:44.825024  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:44.825031  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:44.825036  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:44.827478  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:44.828114  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:44.828132  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:44.828139  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:44.828143  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:44.830193  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:45.324237  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:45.324257  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:45.324265  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:45.324271  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:45.326840  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:45.327469  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:45.327484  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:45.327491  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:45.327497  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:45.329597  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:45.824360  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:45.824379  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:45.824386  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:45.824391  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:45.827061  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:45.827706  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:45.827721  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:45.827729  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:45.827734  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:45.829725  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:46.324488  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:46.324509  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:46.324526  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:46.324529  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:46.327041  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:46.327584  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:46.327599  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:46.327606  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:46.327610  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:46.329538  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:46.824491  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:46.824512  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:46.824519  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:46.824524  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:46.827165  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:46.827820  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:46.827839  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:46.827845  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:46.827850  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:46.829742  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:46.830242  112560 pod_ready.go:103] pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:47.324737  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:47.324756  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:47.324763  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:47.324767  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:47.327489  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:47.328127  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:47.328144  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:47.328150  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:47.328154  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:47.330477  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:47.825192  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:47.825214  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:47.825224  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:47.825229  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:47.827745  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:47.828321  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:47.828340  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:47.828349  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:47.828355  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:47.830337  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:48.324170  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:48.324189  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:48.324197  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:48.324200  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:48.326639  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:48.327272  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:48.327288  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:48.327297  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:48.327302  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:48.329249  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:48.825100  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:48.825136  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:48.825147  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:48.825157  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:48.827346  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:48.827960  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:48.827974  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:48.827982  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:48.827987  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:48.829859  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:48.830285  112560 pod_ready.go:103] pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:49.324608  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:49.324628  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:49.324635  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:49.324640  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:49.326992  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:49.327600  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:49.327617  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:49.327626  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:49.327630  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:49.329591  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:49.824765  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:49.824784  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:49.824793  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:49.824796  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:49.827349  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:49.827963  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:49.827980  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:49.827987  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:49.827991  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:49.829855  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:50.324249  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:50.324267  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:50.324275  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:50.324279  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:50.326643  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:50.327308  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:50.327323  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:50.327330  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:50.327334  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:50.329311  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:50.825112  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:50.825144  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:50.825152  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:50.825156  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:50.827655  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:50.828315  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:50.828331  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:50.828337  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:50.828341  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:50.830409  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:50.830909  112560 pod_ready.go:103] pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:51.324209  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:51.324229  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:51.324236  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:51.324240  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:51.326812  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:51.327426  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:51.327438  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:51.327445  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:51.327452  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:51.329306  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:51.825106  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:51.825138  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:51.825147  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:51.825150  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:51.827763  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:51.828365  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:51.828381  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:51.828389  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:51.828396  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:51.830472  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:52.324197  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:52.324215  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:52.324222  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:52.324225  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:52.326509  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:52.327161  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:52.327179  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:52.327189  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:52.327193  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:52.329068  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:52.824896  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:52.824915  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:52.824923  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:52.824930  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:52.827560  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:52.828223  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:52.828238  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:52.828245  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:52.828250  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:52.830386  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:53.324192  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:53.324215  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:53.324223  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:53.324226  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:53.326691  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:53.327286  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:53.327301  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:53.327308  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:53.327311  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:53.329189  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:53.329650  112560 pod_ready.go:103] pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:53.824992  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:53.825011  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:53.825018  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:53.825023  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:53.827401  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:53.827983  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:53.828000  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:53.828007  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:53.828012  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:53.829915  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:54.324780  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:54.324799  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:54.324807  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:54.324813  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:54.327220  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:54.327856  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:54.327870  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:54.327877  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:54.327881  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:54.329900  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:54.824839  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:54.824858  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:54.824866  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:54.824871  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:54.827154  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:54.827858  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:54.827874  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:54.827881  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:54.827885  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:54.830035  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:55.324834  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:55.324856  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:55.324866  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:55.324873  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:55.327488  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:55.328026  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:55.328040  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:55.328048  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:55.328052  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:55.330347  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:55.330778  112560 pod_ready.go:103] pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace has status "Ready":"False"
	I0819 18:16:55.825243  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:55.825263  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:55.825272  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:55.825278  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:55.827846  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:55.828436  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:55.828451  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:55.828459  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:55.828463  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:55.830387  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:56.325227  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:56.325248  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:56.325259  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:56.325264  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:56.327715  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:56.328323  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:56.328338  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:56.328347  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:56.328355  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:56.330350  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:56.824309  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:56.824330  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:56.824340  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:56.824346  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:56.826823  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:56.827492  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:56.827507  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:56.827515  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:56.827521  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:56.829543  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:57.324267  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-htfhr
	I0819 18:16:57.324288  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.324298  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.324304  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.326950  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:57.327567  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:57.327584  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.327603  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.327612  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.329694  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:57.330147  112560 pod_ready.go:98] node "ha-896148" hosting pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:57.330168  112560 pod_ready.go:82] duration metric: took 20.006120314s for pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:57.330179  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148" hosting pod "coredns-6f6b679f8f-htfhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:57.330188  112560 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-zbfmw" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:57.330238  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-zbfmw
	I0819 18:16:57.330248  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.330257  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.330263  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.332191  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:57.332691  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:57.332706  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.332715  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.332720  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.334735  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:57.335276  112560 pod_ready.go:98] node "ha-896148" hosting pod "coredns-6f6b679f8f-zbfmw" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:57.335300  112560 pod_ready.go:82] duration metric: took 5.104403ms for pod "coredns-6f6b679f8f-zbfmw" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:57.335311  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148" hosting pod "coredns-6f6b679f8f-zbfmw" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:57.335320  112560 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:57.335380  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896148
	I0819 18:16:57.335390  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.335400  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.335406  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.337147  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:57.337724  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:57.337740  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.337749  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.337754  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.339440  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:57.339967  112560 pod_ready.go:98] node "ha-896148" hosting pod "etcd-ha-896148" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:57.339987  112560 pod_ready.go:82] duration metric: took 4.65617ms for pod "etcd-ha-896148" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:57.339999  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148" hosting pod "etcd-ha-896148" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:57.340007  112560 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:57.340058  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-896148-m02
	I0819 18:16:57.340067  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.340076  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.340087  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.341858  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:57.342366  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:16:57.342381  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.342390  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.342395  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.344201  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:57.344668  112560 pod_ready.go:93] pod "etcd-ha-896148-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:16:57.344685  112560 pod_ready.go:82] duration metric: took 4.667116ms for pod "etcd-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:57.344709  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:57.344806  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896148
	I0819 18:16:57.344817  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.344825  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.344830  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.346665  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:57.347214  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:57.347227  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.347236  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.347245  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.348793  112560 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0819 18:16:57.349179  112560 pod_ready.go:98] node "ha-896148" hosting pod "kube-apiserver-ha-896148" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:57.349196  112560 pod_ready.go:82] duration metric: took 4.477092ms for pod "kube-apiserver-ha-896148" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:57.349203  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148" hosting pod "kube-apiserver-ha-896148" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:57.349208  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:57.524579  112560 request.go:632] Waited for 175.320328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896148-m02
	I0819 18:16:57.524657  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-896148-m02
	I0819 18:16:57.524669  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.524687  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.524693  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.527294  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:57.725197  112560 request.go:632] Waited for 197.293889ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:16:57.725274  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:16:57.725284  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.725296  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.725304  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.727577  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:57.728004  112560 pod_ready.go:93] pod "kube-apiserver-ha-896148-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:16:57.728024  112560 pod_ready.go:82] duration metric: took 378.809063ms for pod "kube-apiserver-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:57.728037  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:57.925182  112560 request.go:632] Waited for 197.082127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896148
	I0819 18:16:57.925251  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896148
	I0819 18:16:57.925262  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:57.925272  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:57.925291  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:57.927981  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:58.124914  112560 request.go:632] Waited for 196.348269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:58.124997  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:58.125009  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:58.125018  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:58.125025  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:58.127522  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:58.128134  112560 pod_ready.go:98] node "ha-896148" hosting pod "kube-controller-manager-ha-896148" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:58.128153  112560 pod_ready.go:82] duration metric: took 400.108309ms for pod "kube-controller-manager-ha-896148" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:58.128162  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148" hosting pod "kube-controller-manager-ha-896148" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:58.128169  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:58.325107  112560 request.go:632] Waited for 196.855303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896148-m02
	I0819 18:16:58.325179  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-896148-m02
	I0819 18:16:58.325188  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:58.325196  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:58.325200  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:58.327588  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:58.524493  112560 request.go:632] Waited for 196.31139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:16:58.524574  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:16:58.524585  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:58.524597  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:58.524610  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:58.527173  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:58.527593  112560 pod_ready.go:93] pod "kube-controller-manager-ha-896148-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:16:58.527617  112560 pod_ready.go:82] duration metric: took 399.439559ms for pod "kube-controller-manager-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:58.527628  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xdhg" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:58.724707  112560 request.go:632] Waited for 197.018788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdhg
	I0819 18:16:58.724760  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8xdhg
	I0819 18:16:58.724766  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:58.724773  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:58.724779  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:58.727475  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:58.924306  112560 request.go:632] Waited for 196.282352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:58.924370  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m04
	I0819 18:16:58.924375  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:58.924382  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:58.924386  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:58.927000  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:58.927474  112560 pod_ready.go:93] pod "kube-proxy-8xdhg" in "kube-system" namespace has status "Ready":"True"
	I0819 18:16:58.927492  112560 pod_ready.go:82] duration metric: took 399.855012ms for pod "kube-proxy-8xdhg" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:58.927502  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9g56n" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:59.124649  112560 request.go:632] Waited for 197.090452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g56n
	I0819 18:16:59.124722  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9g56n
	I0819 18:16:59.124730  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:59.124738  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:59.124745  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:59.127452  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:59.325333  112560 request.go:632] Waited for 197.334207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:16:59.325415  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:16:59.325425  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:59.325432  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:59.325437  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:59.327988  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:59.328432  112560 pod_ready.go:93] pod "kube-proxy-9g56n" in "kube-system" namespace has status "Ready":"True"
	I0819 18:16:59.328453  112560 pod_ready.go:82] duration metric: took 400.943936ms for pod "kube-proxy-9g56n" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:59.328464  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fnq4h" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:59.524602  112560 request.go:632] Waited for 196.063607ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnq4h
	I0819 18:16:59.524690  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fnq4h
	I0819 18:16:59.524699  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:59.524706  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:59.524710  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:59.527244  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:59.724809  112560 request.go:632] Waited for 197.012104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:59.724865  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:16:59.724870  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:59.724877  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:59.724885  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:59.727249  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:16:59.727685  112560 pod_ready.go:98] node "ha-896148" hosting pod "kube-proxy-fnq4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:59.727704  112560 pod_ready.go:82] duration metric: took 399.233125ms for pod "kube-proxy-fnq4h" in "kube-system" namespace to be "Ready" ...
	E0819 18:16:59.727712  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148" hosting pod "kube-proxy-fnq4h" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:16:59.727718  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896148" in "kube-system" namespace to be "Ready" ...
	I0819 18:16:59.924792  112560 request.go:632] Waited for 197.007368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896148
	I0819 18:16:59.924864  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896148
	I0819 18:16:59.924869  112560 round_trippers.go:469] Request Headers:
	I0819 18:16:59.924877  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:16:59.924881  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:16:59.927444  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:17:00.124281  112560 request.go:632] Waited for 196.27669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:17:00.124339  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148
	I0819 18:17:00.124345  112560 round_trippers.go:469] Request Headers:
	I0819 18:17:00.124352  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:17:00.124356  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:17:00.127171  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:17:00.127656  112560 pod_ready.go:98] node "ha-896148" hosting pod "kube-scheduler-ha-896148" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:17:00.127677  112560 pod_ready.go:82] duration metric: took 399.950662ms for pod "kube-scheduler-ha-896148" in "kube-system" namespace to be "Ready" ...
	E0819 18:17:00.127686  112560 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-896148" hosting pod "kube-scheduler-ha-896148" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-896148" has status "Ready":"Unknown"
	I0819 18:17:00.127694  112560 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:00.324681  112560 request.go:632] Waited for 196.922197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896148-m02
	I0819 18:17:00.324759  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-896148-m02
	I0819 18:17:00.324769  112560 round_trippers.go:469] Request Headers:
	I0819 18:17:00.324777  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:17:00.324786  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:17:00.327307  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:17:00.525173  112560 request.go:632] Waited for 197.34117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:17:00.525220  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-896148-m02
	I0819 18:17:00.525225  112560 round_trippers.go:469] Request Headers:
	I0819 18:17:00.525233  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:17:00.525237  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:17:00.527596  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:17:00.528043  112560 pod_ready.go:93] pod "kube-scheduler-ha-896148-m02" in "kube-system" namespace has status "Ready":"True"
	I0819 18:17:00.528062  112560 pod_ready.go:82] duration metric: took 400.359026ms for pod "kube-scheduler-ha-896148-m02" in "kube-system" namespace to be "Ready" ...
	I0819 18:17:00.528075  112560 pod_ready.go:39] duration metric: took 23.215796542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 18:17:00.528087  112560 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 18:17:00.528131  112560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:17:00.538646  112560 system_svc.go:56] duration metric: took 10.552121ms WaitForService to wait for kubelet
	I0819 18:17:00.538667  112560 kubeadm.go:582] duration metric: took 30.829700353s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 18:17:00.538683  112560 node_conditions.go:102] verifying NodePressure condition ...
	I0819 18:17:00.725076  112560 request.go:632] Waited for 186.328371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0819 18:17:00.725162  112560 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0819 18:17:00.725172  112560 round_trippers.go:469] Request Headers:
	I0819 18:17:00.725180  112560 round_trippers.go:473]     Accept: application/json, */*
	I0819 18:17:00.725184  112560 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0819 18:17:00.728006  112560 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0819 18:17:00.728944  112560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 18:17:00.728961  112560 node_conditions.go:123] node cpu capacity is 8
	I0819 18:17:00.728971  112560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 18:17:00.728974  112560 node_conditions.go:123] node cpu capacity is 8
	I0819 18:17:00.728978  112560 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0819 18:17:00.728981  112560 node_conditions.go:123] node cpu capacity is 8
	I0819 18:17:00.728987  112560 node_conditions.go:105] duration metric: took 190.297435ms to run NodePressure ...
	I0819 18:17:00.728997  112560 start.go:241] waiting for startup goroutines ...
	I0819 18:17:00.729017  112560 start.go:255] writing updated cluster config ...
	I0819 18:17:00.729317  112560 ssh_runner.go:195] Run: rm -f paused
	I0819 18:17:00.773761  112560 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 18:17:00.776065  112560 out.go:177] * Done! kubectl is now configured to use "ha-896148" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 19 18:16:21 ha-896148 crio[688]: time="2024-08-19 18:16:21.047134045Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f2e2ad9ec21c99b69fb0c4f0df2438435626bcb6a2b089bebb0304c53dade741/merged/etc/group: no such file or directory"
	Aug 19 18:16:21 ha-896148 crio[688]: time="2024-08-19 18:16:21.080533705Z" level=info msg="Created container 3da7b270c38ebb7f637a92621c5771353709f7d4d48f8b47e1f974cbc7003d67: kube-system/storage-provisioner/storage-provisioner" id=1bd882e8-150c-4db2-b16b-0094ce11540d name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 18:16:21 ha-896148 crio[688]: time="2024-08-19 18:16:21.081074817Z" level=info msg="Starting container: 3da7b270c38ebb7f637a92621c5771353709f7d4d48f8b47e1f974cbc7003d67" id=94210a1c-0f9c-4d1f-8e80-4e9b23efbcc4 name=/runtime.v1.RuntimeService/StartContainer
	Aug 19 18:16:21 ha-896148 crio[688]: time="2024-08-19 18:16:21.086646712Z" level=info msg="Started container" PID=2058 containerID=3da7b270c38ebb7f637a92621c5771353709f7d4d48f8b47e1f974cbc7003d67 description=kube-system/storage-provisioner/storage-provisioner id=94210a1c-0f9c-4d1f-8e80-4e9b23efbcc4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a397964c3a9592ca2db1f8d8e8cd90af8464adf57e49832c3a3b8f4b6bb865b5
	Aug 19 18:16:28 ha-896148 crio[688]: time="2024-08-19 18:16:28.867965378Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=9f6106dc-f336-4286-b95f-324a8fc8dbe5 name=/runtime.v1.ImageService/ImageStatus
	Aug 19 18:16:28 ha-896148 crio[688]: time="2024-08-19 18:16:28.868262027Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:89437512,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=9f6106dc-f336-4286-b95f-324a8fc8dbe5 name=/runtime.v1.ImageService/ImageStatus
	Aug 19 18:16:28 ha-896148 crio[688]: time="2024-08-19 18:16:28.868868790Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.0" id=a7e51579-3c92-4f06-aae9-e1df75d7084a name=/runtime.v1.ImageService/ImageStatus
	Aug 19 18:16:28 ha-896148 crio[688]: time="2024-08-19 18:16:28.869076755Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.0],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d],Size_:89437512,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=a7e51579-3c92-4f06-aae9-e1df75d7084a name=/runtime.v1.ImageService/ImageStatus
	Aug 19 18:16:28 ha-896148 crio[688]: time="2024-08-19 18:16:28.869799717Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-896148/kube-controller-manager" id=a8c57cd6-b12c-4731-b186-24d4f16d9b93 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 18:16:28 ha-896148 crio[688]: time="2024-08-19 18:16:28.869900788Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 19 18:16:28 ha-896148 crio[688]: time="2024-08-19 18:16:28.937821318Z" level=info msg="Created container b15dd256b126346b6b5b2e30c46c58672e6a3f4c94df759ef09c7dd1c1984e35: kube-system/kube-controller-manager-ha-896148/kube-controller-manager" id=a8c57cd6-b12c-4731-b186-24d4f16d9b93 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 19 18:16:28 ha-896148 crio[688]: time="2024-08-19 18:16:28.938442574Z" level=info msg="Starting container: b15dd256b126346b6b5b2e30c46c58672e6a3f4c94df759ef09c7dd1c1984e35" id=96921a6a-5244-4932-a905-857e73f3cfa0 name=/runtime.v1.RuntimeService/StartContainer
	Aug 19 18:16:28 ha-896148 crio[688]: time="2024-08-19 18:16:28.944869860Z" level=info msg="Started container" PID=2104 containerID=b15dd256b126346b6b5b2e30c46c58672e6a3f4c94df759ef09c7dd1c1984e35 description=kube-system/kube-controller-manager-ha-896148/kube-controller-manager id=96921a6a-5244-4932-a905-857e73f3cfa0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae51507884ce1aade2e8f0d3fd06189b67fd2c990eccd18a9e0aa03d7a65b575
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.159187729Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.163367943Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.163391976Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.163403838Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.166599740Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.166621233Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.166631890Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.169502912Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.169525581Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.169540616Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.172586280Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 19 18:16:31 ha-896148 crio[688]: time="2024-08-19 18:16:31.172613203Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b15dd256b1263       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   33 seconds ago       Running             kube-controller-manager   6                   ae51507884ce1       kube-controller-manager-ha-896148
	3da7b270c38eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   41 seconds ago       Running             storage-provisioner       5                   a397964c3a959       storage-provisioner
	8d897867518f2       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   43 seconds ago       Running             kube-vip                  3                   d42f4a6200900       kube-vip-ha-896148
	54e722469e54e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   46 seconds ago       Running             kube-apiserver            4                   068ca83fbca4b       kube-apiserver-ha-896148
	7c7460c4157b9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Running             coredns                   2                   572d49b03c3e1       coredns-6f6b679f8f-htfhr
	2f02e57ea7992       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   About a minute ago   Running             busybox                   2                   f09146cc8e7a2       busybox-7dff88458-bvhd5
	12f4a216c59f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Running             coredns                   2                   04718c9098d27       coredns-6f6b679f8f-zbfmw
	20d1c0e622a4c       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f   About a minute ago   Running             kindnet-cni               2                   eda846ccb7ec9       kindnet-ct9nq
	05d677ce1d1ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   About a minute ago   Exited              storage-provisioner       4                   a397964c3a959       storage-provisioner
	bc2f3e710549f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   About a minute ago   Running             kube-proxy                2                   0db697aa96b18       kube-proxy-fnq4h
	5d43ad9c6e09a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Exited              kube-controller-manager   5                   ae51507884ce1       kube-controller-manager-ha-896148
	120c1d6c3236d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      2                   2be17f555acbd       etcd-ha-896148
	da9fe261b2d02       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Exited              kube-apiserver            3                   068ca83fbca4b       kube-apiserver-ha-896148
	c0164f5ef60b4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   About a minute ago   Running             kube-scheduler            2                   fdb1f5f340669       kube-scheduler-ha-896148
	e3e0d2df71f28       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   About a minute ago   Exited              kube-vip                  2                   d42f4a6200900       kube-vip-ha-896148
	
	
	==> coredns [12f4a216c59f625c6cc791900791b4519e73bf4249fc508b977ebb8c691b0e83] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51220 - 20223 "HINFO IN 3740175419442443134.6290476608891558239. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00629269s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1619655296]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:15:50.764) (total time: 30001ms):
	Trace[1619655296]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:16:20.765)
	Trace[1619655296]: [30.00105648s] [30.00105648s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[306327137]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:15:50.764) (total time: 30001ms):
	Trace[306327137]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:16:20.765)
	Trace[306327137]: [30.001159972s] [30.001159972s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[589050676]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:15:50.764) (total time: 30001ms):
	Trace[589050676]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:16:20.765)
	Trace[589050676]: [30.001260512s] [30.001260512s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [7c7460c4157b9f949511457cf8da482882a6b124c973e60cb56c4f07d12e5a81] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34447 - 5011 "HINFO IN 3618214305841559408.3966851018621970075. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00628333s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[226661988]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:15:50.775) (total time: 30001ms):
	Trace[226661988]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:16:20.776)
	Trace[226661988]: [30.001221367s] [30.001221367s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1914357255]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:15:50.775) (total time: 30001ms):
	Trace[1914357255]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:16:20.776)
	Trace[1914357255]: [30.001275351s] [30.001275351s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[93257033]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Aug-2024 18:15:50.775) (total time: 30001ms):
	Trace[93257033]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:16:20.777)
	Trace[93257033]: [30.001352942s] [30.001352942s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-896148
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-896148
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-896148
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T18_08_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:08:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-896148
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:17:02 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 19 Aug 2024 18:15:33 +0000   Mon, 19 Aug 2024 18:16:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 19 Aug 2024 18:15:33 +0000   Mon, 19 Aug 2024 18:16:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 19 Aug 2024 18:15:33 +0000   Mon, 19 Aug 2024 18:16:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 19 Aug 2024 18:15:33 +0000   Mon, 19 Aug 2024 18:16:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-896148
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 388c2d3270f54d85b44938c5e6625f76
	  System UUID:                deb23d25-2b75-4680-b5d6-6fa7603ac3a3
	  Boot ID:                    78fba809-e96d-46e8-9b80-0c45215ddcd4
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bvhd5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 coredns-6f6b679f8f-htfhr             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m28s
	  kube-system                 coredns-6f6b679f8f-zbfmw             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m28s
	  kube-system                 etcd-ha-896148                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m32s
	  kube-system                 kindnet-ct9nq                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m28s
	  kube-system                 kube-apiserver-ha-896148             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-controller-manager-ha-896148    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-proxy-fnq4h                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-ha-896148             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m32s
	  kube-system                 kube-vip-ha-896148                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m27s                  kube-proxy       
	  Normal   Starting                 72s                    kube-proxy       
	  Normal   Starting                 4m11s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    8m39s (x8 over 8m39s)  kubelet          Node ha-896148 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 8m39s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m39s (x8 over 8m39s)  kubelet          Node ha-896148 status is now: NodeHasSufficientMemory
	  Normal   Starting                 8m39s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     8m39s (x7 over 8m39s)  kubelet          Node ha-896148 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m32s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     8m32s                  kubelet          Node ha-896148 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  8m32s                  kubelet          Node ha-896148 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m32s                  kubelet          Node ha-896148 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 8m32s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           8m28s                  node-controller  Node ha-896148 event: Registered Node ha-896148 in Controller
	  Normal   NodeReady                8m15s                  kubelet          Node ha-896148 status is now: NodeReady
	  Normal   RegisteredNode           8m7s                   node-controller  Node ha-896148 event: Registered Node ha-896148 in Controller
	  Normal   RegisteredNode           7m30s                  node-controller  Node ha-896148 event: Registered Node ha-896148 in Controller
	  Normal   RegisteredNode           5m43s                  node-controller  Node ha-896148 event: Registered Node ha-896148 in Controller
	  Normal   NodeHasSufficientMemory  5m2s (x8 over 5m2s)    kubelet          Node ha-896148 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m2s (x8 over 5m2s)    kubelet          Node ha-896148 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m2s (x7 over 5m2s)    kubelet          Node ha-896148 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m2s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m2s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m26s                  node-controller  Node ha-896148 event: Registered Node ha-896148 in Controller
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-896148 event: Registered Node ha-896148 in Controller
	  Normal   RegisteredNode           3m50s                  node-controller  Node ha-896148 event: Registered Node ha-896148 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-896148 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-896148 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node ha-896148 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           86s                    node-controller  Node ha-896148 event: Registered Node ha-896148 in Controller
	  Normal   RegisteredNode           31s                    node-controller  Node ha-896148 event: Registered Node ha-896148 in Controller
	  Normal   NodeNotReady             5s                     node-controller  Node ha-896148 status is now: NodeNotReady
	
	
	Name:               ha-896148-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-896148-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-896148
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_08_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:08:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-896148-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:16:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:15:36 +0000   Mon, 19 Aug 2024 18:08:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:15:36 +0000   Mon, 19 Aug 2024 18:08:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:15:36 +0000   Mon, 19 Aug 2024 18:08:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:15:36 +0000   Mon, 19 Aug 2024 18:09:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-896148-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f144a8fce0f47d4965e6668895c9260
	  System UUID:                950e8d0f-8d18-445d-a594-8f18dc2f0088
	  Boot ID:                    78fba809-e96d-46e8-9b80-0c45215ddcd4
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hb5l6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 etcd-ha-896148-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m13s
	  kube-system                 kindnet-l5v7t                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m15s
	  kube-system                 kube-apiserver-ha-896148-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-controller-manager-ha-896148-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-proxy-9g56n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-ha-896148-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-vip-ha-896148-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m12s                  kube-proxy       
	  Normal   Starting                 63s                    kube-proxy       
	  Normal   Starting                 4m12s                  kube-proxy       
	  Normal   Starting                 5m48s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  8m15s (x8 over 8m15s)  kubelet          Node ha-896148-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m15s (x8 over 8m15s)  kubelet          Node ha-896148-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m15s (x7 over 8m15s)  kubelet          Node ha-896148-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m13s                  node-controller  Node ha-896148-m02 event: Registered Node ha-896148-m02 in Controller
	  Normal   RegisteredNode           8m7s                   node-controller  Node ha-896148-m02 event: Registered Node ha-896148-m02 in Controller
	  Normal   RegisteredNode           7m30s                  node-controller  Node ha-896148-m02 event: Registered Node ha-896148-m02 in Controller
	  Normal   NodeHasSufficientPID     6m9s (x7 over 6m9s)    kubelet          Node ha-896148-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m9s (x8 over 6m9s)    kubelet          Node ha-896148-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m9s (x8 over 6m9s)    kubelet          Node ha-896148-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           5m43s                  node-controller  Node ha-896148-m02 event: Registered Node ha-896148-m02 in Controller
	  Warning  CgroupV1                 5m1s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m1s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m (x8 over 5m1s)      kubelet          Node ha-896148-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m (x8 over 5m1s)      kubelet          Node ha-896148-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m (x7 over 5m1s)      kubelet          Node ha-896148-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m26s                  node-controller  Node ha-896148-m02 event: Registered Node ha-896148-m02 in Controller
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-896148-m02 event: Registered Node ha-896148-m02 in Controller
	  Normal   RegisteredNode           3m50s                  node-controller  Node ha-896148-m02 event: Registered Node ha-896148-m02 in Controller
	  Normal   Starting                 115s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 115s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  115s (x8 over 115s)    kubelet          Node ha-896148-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    115s (x8 over 115s)    kubelet          Node ha-896148-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     115s (x7 over 115s)    kubelet          Node ha-896148-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           86s                    node-controller  Node ha-896148-m02 event: Registered Node ha-896148-m02 in Controller
	  Normal   RegisteredNode           31s                    node-controller  Node ha-896148-m02 event: Registered Node ha-896148-m02 in Controller
	
	
	Name:               ha-896148-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-896148-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9c2db9d51ec33b5c53a86e9ba3d384ee332e3411
	                    minikube.k8s.io/name=ha-896148
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_19T18_10_03_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 18:10:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-896148-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 18:16:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 18:16:37 +0000   Mon, 19 Aug 2024 18:16:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 18:16:37 +0000   Mon, 19 Aug 2024 18:16:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 18:16:37 +0000   Mon, 19 Aug 2024 18:16:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 18:16:37 +0000   Mon, 19 Aug 2024 18:16:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-896148-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f11cfdbb5b1426a8981e10b1ed9b011
	  System UUID:                a1773bb3-ef54-44d0-95d5-2cf4a21c0d69
	  Boot ID:                    78fba809-e96d-46e8-9b80-0c45215ddcd4
	  Kernel Version:             5.15.0-1066-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jgl78    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kindnet-55rn2              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m
	  kube-system                 kube-proxy-8xdhg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m52s                  kube-proxy       
	  Normal   Starting                 7s                     kube-proxy       
	  Normal   Starting                 6m57s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    7m (x2 over 7m)        kubelet          Node ha-896148-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m (x2 over 7m)        kubelet          Node ha-896148-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  7m (x2 over 7m)        kubelet          Node ha-896148-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           6m58s                  node-controller  Node ha-896148-m04 event: Registered Node ha-896148-m04 in Controller
	  Normal   RegisteredNode           6m57s                  node-controller  Node ha-896148-m04 event: Registered Node ha-896148-m04 in Controller
	  Normal   RegisteredNode           6m55s                  node-controller  Node ha-896148-m04 event: Registered Node ha-896148-m04 in Controller
	  Normal   NodeReady                6m45s                  kubelet          Node ha-896148-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m43s                  node-controller  Node ha-896148-m04 event: Registered Node ha-896148-m04 in Controller
	  Normal   RegisteredNode           4m26s                  node-controller  Node ha-896148-m04 event: Registered Node ha-896148-m04 in Controller
	  Normal   RegisteredNode           4m23s                  node-controller  Node ha-896148-m04 event: Registered Node ha-896148-m04 in Controller
	  Normal   RegisteredNode           3m50s                  node-controller  Node ha-896148-m04 event: Registered Node ha-896148-m04 in Controller
	  Normal   NodeNotReady             3m46s                  node-controller  Node ha-896148-m04 status is now: NodeNotReady
	  Normal   Starting                 3m24s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m24s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m18s (x7 over 3m24s)  kubelet          Node ha-896148-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    3m12s (x8 over 3m24s)  kubelet          Node ha-896148-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  3m12s (x8 over 3m24s)  kubelet          Node ha-896148-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           86s                    node-controller  Node ha-896148-m04 event: Registered Node ha-896148-m04 in Controller
	  Normal   NodeNotReady             46s                    node-controller  Node ha-896148-m04 status is now: NodeNotReady
	  Normal   Starting                 38s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 38s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     32s (x7 over 38s)      kubelet          Node ha-896148-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                    node-controller  Node ha-896148-m04 event: Registered Node ha-896148-m04 in Controller
	  Normal   NodeHasNoDiskPressure    25s (x8 over 38s)      kubelet          Node ha-896148-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  25s (x8 over 38s)      kubelet          Node ha-896148-m04 status is now: NodeHasSufficientMemory
	
	
	==> dmesg <==
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-066a03256465
	[  +0.000002] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-066a03256465
	[  +0.000003] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[  +0.992894] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-066a03256465
	[  +0.000006] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[  +0.003955] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-066a03256465
	[  +0.000005] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-066a03256465
	[  +0.000003] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[  +0.028023] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-066a03256465
	[  +0.000012] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[  +6.079401] net_ratelimit: 6 callbacks suppressed
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-066a03256465
	[  +0.000006] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[  +0.003987] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-066a03256465
	[  +0.000006] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-066a03256465
	[  +0.000004] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[Aug19 18:16] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-066a03256465
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-066a03256465
	[  +0.000007] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	[  +0.000011] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-066a03256465
	[  +0.000004] ll header: 00000000: 02 42 8b 71 e4 0d 02 42 c0 a8 31 02 08 00
	
	
	==> etcd [120c1d6c3236d8203cbc8d1c58b1ab8c94dfd16f55008765790f000010d42d36] <==
	{"level":"warn","ts":"2024-08-19T18:15:32.357808Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.904843Z","time spent":"3.452956327s","remote":"127.0.0.1:56400","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":42,"response size":9241,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.357548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.904404Z","time spent":"3.453133757s","remote":"127.0.0.1:56436","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":29,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.357859Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.905330Z","time spent":"3.452523088s","remote":"127.0.0.1:56500","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":0,"response size":29,"request content":"key:\"/registry/ingress/\" range_end:\"/registry/ingress0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.357372Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.795152Z","time spent":"3.56218385s","remote":"127.0.0.1:56528","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":29,"request content":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.357764Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.905595Z","time spent":"3.452156436s","remote":"127.0.0.1:56380","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":29,"response size":155204,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.357756Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.904404Z","time spent":"3.453342214s","remote":"127.0.0.1:56346","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":0,"response size":29,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.357783Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.905318Z","time spent":"3.452448404s","remote":"127.0.0.1:56746","response type":"/etcdserverpb.KV/Range","request count":0,"request size":97,"response count":21,"response size":20214,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" limit:10000 "}
	{"level":"info","ts":"2024-08-19T18:15:32.357784Z","caller":"traceutil/trace.go:171","msg":"trace[1112256548] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:8; response_revision:2281; }","duration":"3.646987147s","start":"2024-08-19T18:15:28.710785Z","end":"2024-08-19T18:15:32.357772Z","steps":["trace[1112256548] 'agreement among raft nodes before linearized reading'  (duration: 3.571796255s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:15:32.359088Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.710747Z","time spent":"3.648322893s","remote":"127.0.0.1:56652","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":8,"response size":5443,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:500 "}
	{"level":"info","ts":"2024-08-19T18:15:32.357809Z","caller":"traceutil/trace.go:171","msg":"trace[1688417476] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-fu54rlin7hcavbdjw6das4nv2e; range_end:; response_count:1; response_revision:2281; }","duration":"4.249345159s","start":"2024-08-19T18:15:28.108457Z","end":"2024-08-19T18:15:32.357802Z","steps":["trace[1688417476] 'agreement among raft nodes before linearized reading'  (duration: 4.174128609s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-19T18:15:32.359285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.108431Z","time spent":"4.250841056s","remote":"127.0.0.1:56462","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":711,"request content":"key:\"/registry/leases/kube-system/apiserver-fu54rlin7hcavbdjw6das4nv2e\" "}
	{"level":"warn","ts":"2024-08-19T18:15:32.357829Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.905531Z","time spent":"3.452268815s","remote":"127.0.0.1:56554","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":67,"response size":60344,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.357838Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:29.263545Z","time spent":"3.094279042s","remote":"127.0.0.1:56554","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":67,"response size":60344,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.357983Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.904959Z","time spent":"3.453011132s","remote":"127.0.0.1:56570","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":2,"response size":936,"request content":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.358007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.904733Z","time spent":"3.453267681s","remote":"127.0.0.1:56486","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":29,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.358010Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:29.119827Z","time spent":"3.238172912s","remote":"127.0.0.1:56552","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":12,"response size":8719,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.358026Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:29.058103Z","time spent":"3.2999169s","remote":"127.0.0.1:56580","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":1016,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.358033Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:29.215039Z","time spent":"3.142984431s","remote":"127.0.0.1:56722","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":29,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.358031Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:29.487637Z","time spent":"2.870383099s","remote":"127.0.0.1:56706","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":29,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.358052Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:29.178471Z","time spent":"3.179574633s","remote":"127.0.0.1:56268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":7,"response size":10973,"request content":"key:\"/registry/configmaps/kube-system/\" range_end:\"/registry/configmaps/kube-system0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.358055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.904928Z","time spent":"3.453121335s","remote":"127.0.0.1:56454","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":0,"response size":29,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.358055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:29.205636Z","time spent":"3.152407743s","remote":"127.0.0.1:56716","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":0,"response size":29,"request content":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.358076Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-19T18:15:28.904434Z","time spent":"3.453634534s","remote":"127.0.0.1:56728","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":29,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:10000 "}
	{"level":"warn","ts":"2024-08-19T18:15:32.871098Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"749a06ed48aaa4cf","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-19T18:15:32.872430Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"749a06ed48aaa4cf","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 18:17:03 up  1:59,  0 users,  load average: 0.73, 1.16, 0.86
	Linux ha-896148 5.15.0-1066-gcp #74~20.04.1-Ubuntu SMP Fri Jul 26 09:28:41 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [20d1c0e622a4c06ddd673334f919f13188e662b2427894ccb25d556683011530] <==
	E0819 18:16:39.907776       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 18:16:39.909650       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 18:16:39.909679       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 18:16:41.157794       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:16:41.157842       1 main.go:299] handling current node
	I0819 18:16:41.157863       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0819 18:16:41.157869       1 main.go:322] Node ha-896148-m02 has CIDR [10.244.1.0/24] 
	I0819 18:16:41.158007       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0819 18:16:41.158017       1 main.go:322] Node ha-896148-m04 has CIDR [10.244.3.0/24] 
	I0819 18:16:51.158295       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:16:51.158335       1 main.go:299] handling current node
	I0819 18:16:51.158354       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0819 18:16:51.158361       1 main.go:322] Node ha-896148-m02 has CIDR [10.244.1.0/24] 
	I0819 18:16:51.158501       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0819 18:16:51.158517       1 main.go:322] Node ha-896148-m04 has CIDR [10.244.3.0/24] 
	W0819 18:16:58.655882       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 18:16:58.655922       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 18:16:59.768067       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:16:59.768101       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 18:17:01.158588       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 18:17:01.158636       1 main.go:299] handling current node
	I0819 18:17:01.158655       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0819 18:17:01.158663       1 main.go:322] Node ha-896148-m02 has CIDR [10.244.1.0/24] 
	I0819 18:17:01.158872       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0819 18:17:01.158887       1 main.go:322] Node ha-896148-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [54e722469e54edfbd4ac25deb2b9cfe6cb429a5feeea647cab2803a41d2d69e1] <==
	I0819 18:16:17.489520       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0819 18:16:17.489604       1 controller.go:142] Starting OpenAPI controller
	I0819 18:16:17.490540       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0819 18:16:17.513991       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0819 18:16:17.514026       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0819 18:16:17.514077       1 aggregator.go:171] initial CRD sync complete...
	I0819 18:16:17.514144       1 autoregister_controller.go:144] Starting autoregister controller
	I0819 18:16:17.514215       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0819 18:16:17.514248       1 cache.go:39] Caches are synced for autoregister controller
	I0819 18:16:17.589747       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 18:16:17.589764       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 18:16:17.589777       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 18:16:17.589785       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 18:16:17.590032       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 18:16:17.590110       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 18:16:17.590127       1 shared_informer.go:320] Caches are synced for configmaps
	I0819 18:16:17.590377       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 18:16:17.657367       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 18:16:17.657411       1 policy_source.go:224] refreshing policies
	I0819 18:16:17.660465       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 18:16:17.727851       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 18:16:18.493005       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 18:16:18.766271       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0819 18:16:18.767742       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 18:16:18.772863       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [da9fe261b2d02e0558bced4ecbab4f7c945c5bad471912c9ef41b3033e1d08b8] <==
	E0819 18:15:32.273691       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0819 18:15:32.274239       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0819 18:15:32.274756       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0819 18:15:32.275272       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0819 18:15:32.275755       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0819 18:15:32.276431       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: leader changed\"}: etcdserver: leader changed" logger="UnhandledError"
	I0819 18:15:32.366294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0819 18:15:32.369881       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0819 18:15:32.384736       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0819 18:15:32.392710       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0819 18:15:32.392729       1 policy_source.go:224] refreshing policies
	I0819 18:15:32.403485       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0819 18:15:32.403659       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0819 18:15:32.403676       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0819 18:15:32.403707       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0819 18:15:32.403741       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0819 18:15:32.406165       1 cache.go:39] Caches are synced for autoregister controller
	I0819 18:15:32.408142       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0819 18:15:32.413884       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0819 18:15:32.465837       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 18:15:32.471360       1 controller.go:615] quota admission added evaluator for: endpoints
	I0819 18:15:32.476778       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0819 18:15:32.480175       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0819 18:15:33.113875       1 shared_informer.go:320] Caches are synced for configmaps
	F0819 18:16:15.102963       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [5d43ad9c6e09af3745e00502a10d4c26d84a2477655680191f3b18bf6e21e4f1] <==
	I0819 18:15:51.003946       1 serving.go:386] Generated self-signed cert in-memory
	I0819 18:15:51.220024       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0819 18:15:51.220045       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:15:51.221727       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0819 18:15:51.221761       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0819 18:15:51.222049       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0819 18:15:51.222134       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0819 18:16:01.231090       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [b15dd256b126346b6b5b2e30c46c58672e6a3f4c94df759ef09c7dd1c1984e35] <==
	I0819 18:16:31.831755       1 shared_informer.go:320] Caches are synced for garbage collector
	I0819 18:16:31.831783       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0819 18:16:37.309117       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-896148-m04"
	I0819 18:16:37.309220       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896148-m04"
	I0819 18:16:37.320353       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896148-m04"
	I0819 18:16:41.305119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896148-m04"
	I0819 18:16:42.541211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="83.188µs"
	I0819 18:16:43.615293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.203µs"
	I0819 18:16:55.643472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.807348ms"
	I0819 18:16:55.643586       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.217µs"
	I0819 18:16:57.013298       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896148"
	I0819 18:16:57.013458       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-896148-m04"
	I0819 18:16:57.024984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896148"
	I0819 18:16:57.076872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.304974ms"
	I0819 18:16:57.076961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.448µs"
	I0819 18:17:01.424361       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896148"
	I0819 18:17:02.165277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-896148"
	I0819 18:17:02.760290       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-t7rbx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-t7rbx\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 18:17:02.760477       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d9435dea-82a6-474c-a280-93d6fc088a2b", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-t7rbx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-t7rbx": the object has been modified; please apply your changes to the latest version and try again
	I0819 18:17:02.764561       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="34.425048ms"
	I0819 18:17:02.764865       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="159.267µs"
	I0819 18:17:02.790764       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-t7rbx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-t7rbx\": the object has been modified; please apply your changes to the latest version and try again"
	I0819 18:17:02.791262       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d9435dea-82a6-474c-a280-93d6fc088a2b", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-t7rbx EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-t7rbx": the object has been modified; please apply your changes to the latest version and try again
	I0819 18:17:02.804707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="24.663674ms"
	I0819 18:17:02.804847       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="92.982µs"
	
	
	==> kube-proxy [bc2f3e710549fcee55548fcfa07627737ed0be613780014a066e7683a2b17c05] <==
	I0819 18:15:50.834405       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 18:15:50.834464       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 18:15:50.863203       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 18:15:50.863245       1 server_linux.go:169] "Using iptables Proxier"
	I0819 18:15:50.864996       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 18:15:50.865439       1 server.go:483] "Version info" version="v1.31.0"
	I0819 18:15:50.865467       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 18:15:50.866421       1 config.go:197] "Starting service config controller"
	I0819 18:15:50.866466       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 18:15:50.866487       1 config.go:104] "Starting endpoint slice config controller"
	I0819 18:15:50.866500       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 18:15:50.867052       1 config.go:326] "Starting node config controller"
	I0819 18:15:50.867193       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 18:15:50.967542       1 shared_informer.go:320] Caches are synced for node config
	I0819 18:15:50.967606       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 18:15:50.967701       1 shared_informer.go:320] Caches are synced for service config
	E0819 18:16:17.580532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io) - error from a previous attempt: read tcp 192.168.49.254:46686->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.580737       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.254:46698->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.580810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.254:46680->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	W0819 18:17:02.583400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-896148&resourceVersion=2477": http2: client connection lost
	W0819 18:17:02.583423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2544": http2: client connection lost
	E0819 18:17:02.583456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-896148&resourceVersion=2477\": http2: client connection lost" logger="UnhandledError"
	W0819 18:17:02.583399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2467": http2: client connection lost
	E0819 18:17:02.583466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2544\": http2: client connection lost" logger="UnhandledError"
	E0819 18:17:02.583485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2467\": http2: client connection lost" logger="UnhandledError"
	
	
	==> kube-scheduler [c0164f5ef60b448f848a931cd02b8c3b6e6ac9f11291ce48ff8030c5eaf71376] <==
	W0819 18:15:31.040342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 18:15:31.040392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:15:31.313010       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 18:15:31.313058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 18:15:31.477974       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 18:15:31.478015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 18:15:31.582866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 18:15:31.582908       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 18:15:31.904644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 18:15:31.904694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 18:15:33.674324       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0819 18:16:17.509077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:51870->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:51918->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:51912->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:51852->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:51838->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:51932->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:51858->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:51856->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:51822->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:51802->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:51892->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509879       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:51880->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.509955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:51890->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0819 18:16:17.510058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:51810->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Aug 19 18:16:45 ha-896148 kubelet[843]: E0819 18:16:45.888537     843 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091405888307339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:16:55 ha-896148 kubelet[843]: E0819 18:16:55.190019     843 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-896148?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Aug 19 18:16:55 ha-896148 kubelet[843]: E0819 18:16:55.889740     843 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091415889484102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:16:55 ha-896148 kubelet[843]: E0819 18:16:55.889782     843 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724091415889484102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156832,},InodesUsed:&UInt64Value{Value:75,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 19 18:17:02 ha-896148 kubelet[843]: W0819 18:17:02.686305     843 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-896148&resourceVersion=2590": http2: client connection lost
	Aug 19 18:17:02 ha-896148 kubelet[843]: W0819 18:17:02.686338     843 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2477": http2: client connection lost
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686379     843 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dha-896148&resourceVersion=2590\": http2: client connection lost" logger="UnhandledError"
	Aug 19 18:17:02 ha-896148 kubelet[843]: W0819 18:17:02.686360     843 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2281": http2: client connection lost
	Aug 19 18:17:02 ha-896148 kubelet[843]: I0819 18:17:02.686301     843 status_manager.go:851] "Failed to get status for pod" podUID="b1040e071f926da96c8cc1bdb0d58160" pod="kube-system/kube-vip-ha-896148" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-896148\": http2: client connection lost"
	Aug 19 18:17:02 ha-896148 kubelet[843]: W0819 18:17:02.686386     843 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2281": http2: client connection lost
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686436     843 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2281\": http2: client connection lost" logger="UnhandledError"
	Aug 19 18:17:02 ha-896148 kubelet[843]: W0819 18:17:02.686340     843 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2477": http2: client connection lost
	Aug 19 18:17:02 ha-896148 kubelet[843]: W0819 18:17:02.686448     843 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2477": http2: client connection lost
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686450     843 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2281\": http2: client connection lost" logger="UnhandledError"
	Aug 19 18:17:02 ha-896148 kubelet[843]: W0819 18:17:02.686397     843 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-896148&resourceVersion=2477": http2: client connection lost
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686464     843 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2477\": http2: client connection lost" logger="UnhandledError"
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686396     843 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-896148?timeout=10s\": http2: client connection lost"
	Aug 19 18:17:02 ha-896148 kubelet[843]: W0819 18:17:02.686450     843 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2281": http2: client connection lost
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686488     843 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-896148&resourceVersion=2477\": http2: client connection lost" logger="UnhandledError"
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686303     843 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-896148.17ed33e9f7e8eaf4\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-ha-896148.17ed33e9f7e8eaf4  kube-system   2401 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-896148,UID:fa0d9d6e97a788919810a2a5000d23e9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.0\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-896148,},FirstTimestamp:2024-08-19 18:15:12 +0000 UTC,LastTimestamp:2024-08-19 18:16:16.018726034 +0000 UTC m=+70.224624418,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-896148,}"
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686518     843 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2281\": http2: client connection lost" logger="UnhandledError"
	Aug 19 18:17:02 ha-896148 kubelet[843]: W0819 18:17:02.686302     843 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2477": http2: client connection lost
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686553     843 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2477\": http2: client connection lost" logger="UnhandledError"
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686392     843 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2477\": http2: client connection lost" logger="UnhandledError"
	Aug 19 18:17:02 ha-896148 kubelet[843]: E0819 18:17:02.686473     843 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=2477\": http2: client connection lost" logger="UnhandledError"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-896148 -n ha-896148
helpers_test.go:261: (dbg) Run:  kubectl --context ha-896148 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (124.61s)

                                                
                                    

Test pass (300/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 5.11
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 4.83
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.19
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.03
21 TestBinaryMirror 0.72
22 TestOffline 56.11
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 174.07
31 TestAddons/serial/GCPAuth/Namespaces 0.14
33 TestAddons/parallel/Registry 13.72
35 TestAddons/parallel/InspektorGadget 11.63
37 TestAddons/parallel/HelmTiller 8.97
39 TestAddons/parallel/CSI 52.63
40 TestAddons/parallel/Headlamp 16.67
41 TestAddons/parallel/CloudSpanner 5.82
42 TestAddons/parallel/LocalPath 8.04
43 TestAddons/parallel/NvidiaDevicePlugin 5.42
44 TestAddons/parallel/Yakd 10.6
45 TestAddons/StoppedEnableDisable 12.05
46 TestCertOptions 31.51
47 TestCertExpiration 238.77
49 TestForceSystemdFlag 26.49
50 TestForceSystemdEnv 36.52
52 TestKVMDriverInstallOrUpdate 1.25
56 TestErrorSpam/setup 23.08
57 TestErrorSpam/start 0.55
58 TestErrorSpam/status 0.82
59 TestErrorSpam/pause 1.44
60 TestErrorSpam/unpause 1.55
61 TestErrorSpam/stop 1.32
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 41.13
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 21.65
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.89
73 TestFunctional/serial/CacheCmd/cache/add_local 0.94
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 27.89
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.25
84 TestFunctional/serial/LogsFileCmd 1.26
85 TestFunctional/serial/InvalidService 4.25
87 TestFunctional/parallel/ConfigCmd 0.33
88 TestFunctional/parallel/DashboardCmd 8.55
89 TestFunctional/parallel/DryRun 0.32
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.87
95 TestFunctional/parallel/ServiceCmdConnect 13.66
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 28.94
99 TestFunctional/parallel/SSHCmd 0.75
100 TestFunctional/parallel/CpCmd 1.66
101 TestFunctional/parallel/MySQL 19.64
102 TestFunctional/parallel/FileSync 0.3
103 TestFunctional/parallel/CertSync 1.82
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
111 TestFunctional/parallel/License 0.15
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
116 TestFunctional/parallel/ImageCommands/ImageBuild 3.97
117 TestFunctional/parallel/ImageCommands/Setup 0.55
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.43
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.28
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.37
130 TestFunctional/parallel/ImageCommands/ImageRemove 1.06
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.67
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.85
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
139 TestFunctional/parallel/ServiceCmd/DeployApp 7.13
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
141 TestFunctional/parallel/ProfileCmd/profile_list 0.31
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
143 TestFunctional/parallel/MountCmd/any-port 5.38
144 TestFunctional/parallel/MountCmd/specific-port 1.79
145 TestFunctional/parallel/ServiceCmd/List 0.89
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.89
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
149 TestFunctional/parallel/ServiceCmd/Format 0.53
150 TestFunctional/parallel/ServiceCmd/URL 0.55
151 TestFunctional/parallel/Version/short 0.05
152 TestFunctional/parallel/Version/components 0.72
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 97.32
160 TestMultiControlPlane/serial/DeployApp 3.74
161 TestMultiControlPlane/serial/PingHostFromPods 0.96
162 TestMultiControlPlane/serial/AddWorkerNode 31.76
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.6
165 TestMultiControlPlane/serial/CopyFile 15.01
166 TestMultiControlPlane/serial/StopSecondaryNode 12.4
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.44
168 TestMultiControlPlane/serial/RestartSecondaryNode 22.72
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.34
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 175.12
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.18
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.42
173 TestMultiControlPlane/serial/StopCluster 35.34
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.43
176 TestMultiControlPlane/serial/AddSecondaryNode 41.04
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.6
181 TestJSONOutput/start/Command 39.53
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.65
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.55
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.7
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.19
206 TestKicCustomNetwork/create_custom_network 26.36
207 TestKicCustomNetwork/use_default_bridge_network 23.59
208 TestKicExistingNetwork 25.33
209 TestKicCustomSubnet 22.82
210 TestKicStaticIP 25.34
211 TestMainNoArgs 0.04
212 TestMinikubeProfile 50.61
215 TestMountStart/serial/StartWithMountFirst 5.13
216 TestMountStart/serial/VerifyMountFirst 0.23
217 TestMountStart/serial/StartWithMountSecond 5.24
218 TestMountStart/serial/VerifyMountSecond 0.22
219 TestMountStart/serial/DeleteFirst 1.58
220 TestMountStart/serial/VerifyMountPostDelete 0.22
221 TestMountStart/serial/Stop 1.16
222 TestMountStart/serial/RestartStopped 7.29
223 TestMountStart/serial/VerifyMountPostStop 0.23
226 TestMultiNode/serial/FreshStart2Nodes 64.99
227 TestMultiNode/serial/DeployApp2Nodes 2.88
228 TestMultiNode/serial/PingHostFrom2Pods 0.66
229 TestMultiNode/serial/AddNode 29.24
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.27
232 TestMultiNode/serial/CopyFile 8.4
233 TestMultiNode/serial/StopNode 2.02
234 TestMultiNode/serial/StartAfterStop 8.68
235 TestMultiNode/serial/RestartKeepsNodes 92.79
236 TestMultiNode/serial/DeleteNode 5.14
237 TestMultiNode/serial/StopMultiNode 23.58
238 TestMultiNode/serial/RestartMultiNode 46.17
239 TestMultiNode/serial/ValidateNameConflict 23.13
244 TestPreload 103.31
246 TestScheduledStopUnix 96.25
249 TestInsufficientStorage 12.43
250 TestRunningBinaryUpgrade 51.06
252 TestKubernetesUpgrade 344.46
253 TestMissingContainerUpgrade 119.33
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
259 TestNoKubernetes/serial/StartWithK8s 33.5
264 TestNetworkPlugins/group/false 7.63
268 TestNoKubernetes/serial/StartWithStopK8s 8
269 TestNoKubernetes/serial/Start 7.54
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
271 TestNoKubernetes/serial/ProfileList 1.45
272 TestNoKubernetes/serial/Stop 1.19
273 TestNoKubernetes/serial/StartNoArgs 8.97
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
275 TestStoppedBinaryUpgrade/Setup 0.5
276 TestStoppedBinaryUpgrade/Upgrade 88.38
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
279 TestPause/serial/Start 43.33
280 TestPause/serial/SecondStartNoReconfiguration 33.6
288 TestNetworkPlugins/group/auto/Start 43.91
289 TestPause/serial/Pause 0.75
290 TestPause/serial/VerifyStatus 0.33
291 TestPause/serial/Unpause 0.64
292 TestPause/serial/PauseAgain 0.76
293 TestPause/serial/DeletePaused 3.71
294 TestPause/serial/VerifyDeletedResources 0.54
295 TestNetworkPlugins/group/kindnet/Start 42.56
296 TestNetworkPlugins/group/calico/Start 52.23
297 TestNetworkPlugins/group/auto/KubeletFlags 0.24
298 TestNetworkPlugins/group/auto/NetCatPod 9.22
299 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
300 TestNetworkPlugins/group/auto/DNS 0.13
301 TestNetworkPlugins/group/auto/Localhost 0.11
302 TestNetworkPlugins/group/auto/HairPin 0.11
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
304 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
305 TestNetworkPlugins/group/kindnet/DNS 0.13
306 TestNetworkPlugins/group/kindnet/Localhost 0.1
307 TestNetworkPlugins/group/kindnet/HairPin 0.11
308 TestNetworkPlugins/group/custom-flannel/Start 44.54
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.26
311 TestNetworkPlugins/group/calico/NetCatPod 11.22
312 TestNetworkPlugins/group/enable-default-cni/Start 60
313 TestNetworkPlugins/group/calico/DNS 0.16
314 TestNetworkPlugins/group/calico/Localhost 0.13
315 TestNetworkPlugins/group/calico/HairPin 0.12
316 TestNetworkPlugins/group/flannel/Start 47.63
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
319 TestNetworkPlugins/group/custom-flannel/DNS 0.13
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
324 TestNetworkPlugins/group/bridge/Start 62.48
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
330 TestNetworkPlugins/group/flannel/NetCatPod 10.31
331 TestNetworkPlugins/group/flannel/DNS 0.13
332 TestNetworkPlugins/group/flannel/Localhost 0.12
333 TestNetworkPlugins/group/flannel/HairPin 0.13
335 TestStartStop/group/old-k8s-version/serial/FirstStart 142.14
337 TestStartStop/group/no-preload/serial/FirstStart 57.23
339 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.1
340 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
341 TestNetworkPlugins/group/bridge/NetCatPod 9.22
342 TestNetworkPlugins/group/bridge/DNS 0.15
343 TestNetworkPlugins/group/bridge/Localhost 0.13
344 TestNetworkPlugins/group/bridge/HairPin 0.13
346 TestStartStop/group/newest-cni/serial/FirstStart 25
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
349 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.91
350 TestStartStop/group/no-preload/serial/DeployApp 9.26
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.16
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.9
354 TestStartStop/group/no-preload/serial/Stop 11.91
355 TestStartStop/group/newest-cni/serial/DeployApp 0
356 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
357 TestStartStop/group/newest-cni/serial/Stop 1.19
358 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
359 TestStartStop/group/newest-cni/serial/SecondStart 13
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
361 TestStartStop/group/no-preload/serial/SecondStart 263.56
362 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
365 TestStartStop/group/newest-cni/serial/Pause 2.95
367 TestStartStop/group/embed-certs/serial/FirstStart 41.23
368 TestStartStop/group/old-k8s-version/serial/DeployApp 8.35
369 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.79
370 TestStartStop/group/embed-certs/serial/DeployApp 9.23
371 TestStartStop/group/old-k8s-version/serial/Stop 11.94
372 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.78
373 TestStartStop/group/embed-certs/serial/Stop 14.47
374 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
375 TestStartStop/group/old-k8s-version/serial/SecondStart 139.88
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
377 TestStartStop/group/embed-certs/serial/SecondStart 262.01
378 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
381 TestStartStop/group/old-k8s-version/serial/Pause 2.44
382 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
384 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
385 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.51
386 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
387 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
389 TestStartStop/group/no-preload/serial/Pause 2.43
390 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
392 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
393 TestStartStop/group/embed-certs/serial/Pause 2.4

TestDownloadOnly/v1.20.0/json-events (5.11s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-550969 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-550969 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.107913002s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.11s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-550969
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-550969: exit status 85 (57.042438ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-550969 | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC |          |
	|         | -p download-only-550969        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:56:35
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:56:35.038034   30977 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:56:35.038160   30977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:56:35.038170   30977 out.go:358] Setting ErrFile to fd 2...
	I0819 17:56:35.038176   30977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:56:35.038337   30977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	W0819 17:56:35.038448   30977 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19468-24160/.minikube/config/config.json: open /home/jenkins/minikube-integration/19468-24160/.minikube/config/config.json: no such file or directory
	I0819 17:56:35.039017   30977 out.go:352] Setting JSON to true
	I0819 17:56:35.039857   30977 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5945,"bootTime":1724084250,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:56:35.039907   30977 start.go:139] virtualization: kvm guest
	I0819 17:56:35.042425   30977 out.go:97] [download-only-550969] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0819 17:56:35.042538   30977 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 17:56:35.042604   30977 notify.go:220] Checking for updates...
	I0819 17:56:35.044031   30977 out.go:169] MINIKUBE_LOCATION=19468
	I0819 17:56:35.045393   30977 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:56:35.046704   30977 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 17:56:35.048083   30977 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	I0819 17:56:35.049630   30977 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 17:56:35.051990   30977 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 17:56:35.052196   30977 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:56:35.071729   30977 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:56:35.071825   30977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:56:35.416186   30977 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 17:56:35.407302543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 17:56:35.416283   30977 docker.go:307] overlay module found
	I0819 17:56:35.417997   30977 out.go:97] Using the docker driver based on user configuration
	I0819 17:56:35.418017   30977 start.go:297] selected driver: docker
	I0819 17:56:35.418028   30977 start.go:901] validating driver "docker" against <nil>
	I0819 17:56:35.418104   30977 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:56:35.465247   30977 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 17:56:35.457090032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 17:56:35.465460   30977 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:56:35.465953   30977 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0819 17:56:35.466114   30977 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 17:56:35.467807   30977 out.go:169] Using Docker driver with root privileges
	I0819 17:56:35.468837   30977 cni.go:84] Creating CNI manager for ""
	I0819 17:56:35.468860   30977 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0819 17:56:35.468874   30977 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:56:35.468942   30977 start.go:340] cluster config:
	{Name:download-only-550969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-550969 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:56:35.470093   30977 out.go:97] Starting "download-only-550969" primary control-plane node in "download-only-550969" cluster
	I0819 17:56:35.470113   30977 cache.go:121] Beginning downloading kic base image for docker with crio
	I0819 17:56:35.471109   30977 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 17:56:35.471129   30977 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:56:35.471256   30977 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 17:56:35.485699   30977 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 17:56:35.485854   30977 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 17:56:35.485929   30977 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 17:56:35.496497   30977 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:56:35.496516   30977 cache.go:56] Caching tarball of preloaded images
	I0819 17:56:35.496615   30977 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:56:35.498030   30977 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 17:56:35.498043   30977 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 17:56:35.522446   30977 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0819 17:56:38.730361   30977 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 17:56:38.730433   30977 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19468-24160/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0819 17:56:38.754031   30977 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 17:56:39.636388   30977 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0819 17:56:39.636708   30977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/download-only-550969/config.json ...
	I0819 17:56:39.636734   30977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/download-only-550969/config.json: {Name:mk59609133e8dfb8d794f321923cc46b56b1c5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:56:39.636896   30977 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0819 17:56:39.637057   30977 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19468-24160/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-550969 host does not exist
	  To start a cluster, run: "minikube start -p download-only-550969"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-550969
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (4.83s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-890684 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-890684 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.828754665s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (4.83s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-890684
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-890684: exit status 85 (55.185005ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-550969 | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC |                     |
	|         | -p download-only-550969        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	| delete  | -p download-only-550969        | download-only-550969 | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC | 19 Aug 24 17:56 UTC |
	| start   | -o=json --download-only        | download-only-890684 | jenkins | v1.33.1 | 19 Aug 24 17:56 UTC |                     |
	|         | -p download-only-890684        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:56:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:56:40.509754   31332 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:56:40.509837   31332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:56:40.509844   31332 out.go:358] Setting ErrFile to fd 2...
	I0819 17:56:40.509848   31332 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:56:40.509992   31332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 17:56:40.510486   31332 out.go:352] Setting JSON to true
	I0819 17:56:40.511316   31332 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5950,"bootTime":1724084250,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 17:56:40.511371   31332 start.go:139] virtualization: kvm guest
	I0819 17:56:40.513435   31332 out.go:97] [download-only-890684] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 17:56:40.513584   31332 notify.go:220] Checking for updates...
	I0819 17:56:40.514879   31332 out.go:169] MINIKUBE_LOCATION=19468
	I0819 17:56:40.516196   31332 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:56:40.517738   31332 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 17:56:40.519048   31332 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	I0819 17:56:40.520228   31332 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0819 17:56:40.522623   31332 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 17:56:40.522810   31332 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:56:40.545477   31332 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:56:40.545567   31332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:56:40.592453   31332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-08-19 17:56:40.583720687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 17:56:40.592557   31332 docker.go:307] overlay module found
	I0819 17:56:40.594275   31332 out.go:97] Using the docker driver based on user configuration
	I0819 17:56:40.594305   31332 start.go:297] selected driver: docker
	I0819 17:56:40.594320   31332 start.go:901] validating driver "docker" against <nil>
	I0819 17:56:40.594417   31332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:56:40.638907   31332 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-08-19 17:56:40.630536151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 17:56:40.639074   31332 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:56:40.639564   31332 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0819 17:56:40.639717   31332 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 17:56:40.641470   31332 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-890684 host does not exist
	  To start a cluster, run: "minikube start -p download-only-890684"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.19s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-890684
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.03s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-314754 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-314754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-314754
--- PASS: TestDownloadOnlyKic (1.03s)

TestBinaryMirror (0.72s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-755146 --alsologtostderr --binary-mirror http://127.0.0.1:44393 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-755146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-755146
--- PASS: TestBinaryMirror (0.72s)

TestOffline (56.11s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-464724 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-464724 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (53.839680747s)
helpers_test.go:175: Cleaning up "offline-crio-464724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-464724
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-464724: (2.272828123s)
--- PASS: TestOffline (56.11s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-142951
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-142951: exit status 85 (46.150333ms)

-- stdout --
	* Profile "addons-142951" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-142951"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-142951
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-142951: exit status 85 (45.186928ms)

-- stdout --
	* Profile "addons-142951" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-142951"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (174.07s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-142951 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-142951 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m54.068464457s)
--- PASS: TestAddons/Setup (174.07s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-142951 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-142951 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/parallel/Registry (13.72s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.370188ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-mflg4" [7a8a2fd6-50f4-4941-a77a-aa97fe6fde07] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002515767s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cpszr" [4104108c-9aa8-4ddc-b4ab-13ffb2364b83] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00298364s
addons_test.go:342: (dbg) Run:  kubectl --context addons-142951 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-142951 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-142951 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.029733322s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 ip
2024/08/19 18:00:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.72s)
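Note: the registry verification above boils down to a single in-cluster HTTP probe from a throwaway busybox pod. A minimal sketch of re-running that same probe by hand, reusing the profile name and image from this run (the pod name registry-check is arbitrary):

  kubectl --context addons-142951 run registry-check --rm -it --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

On success this should print the registry service's response headers and exit 0, mirroring what the test asserts.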

                                                
                                    
TestAddons/parallel/InspektorGadget (11.63s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2xbjm" [a3ce48bf-6d42-479f-97e4-f2abf3c478f0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004219861s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-142951
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-142951: (5.628471346s)
--- PASS: TestAddons/parallel/InspektorGadget (11.63s)

TestAddons/parallel/HelmTiller (8.97s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.043156ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-gjp98" [c259324f-94be-46a4-9f28-bb1278b517b6] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003458779s
addons_test.go:475: (dbg) Run:  kubectl --context addons-142951 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-142951 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.493318431s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.97s)

                                                
                                    
TestAddons/parallel/CSI (52.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.611803ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-142951 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-142951 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2ba0ea8b-afab-41f0-b18c-d17c959b8322] Pending
helpers_test.go:344: "task-pv-pod" [2ba0ea8b-afab-41f0-b18c-d17c959b8322] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2ba0ea8b-afab-41f0-b18c-d17c959b8322] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003867196s
addons_test.go:590: (dbg) Run:  kubectl --context addons-142951 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-142951 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-142951 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-142951 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-142951 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-142951 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [42ecdc53-b0e7-460e-a62c-a7a5efb34086] Pending
helpers_test.go:344: "task-pv-pod-restore" [42ecdc53-b0e7-460e-a62c-a7a5efb34086] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [42ecdc53-b0e7-460e-a62c-a7a5efb34086] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.003345759s
addons_test.go:632: (dbg) Run:  kubectl --context addons-142951 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-142951 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-142951 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-142951 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.44886747s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.63s)
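The repeated helpers_test.go:394 lines above correspond to the test polling the PVC phase until the claim reports Bound. A rough sketch of that kind of poll loop, shelling out to kubectl exactly as the log shows; this is not the actual test helper, and the context, namespace, and claim names are simply the ones from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
// until the claim reports Bound or the timeout expires.
func waitForPVCBound(context, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	// 6m matches the "waiting 6m0s for pvc" lines in the log above.
	if err := waitForPVCBound("addons-142951", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}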

                                                
                                    
TestAddons/parallel/Headlamp (16.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-142951 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-142951 --alsologtostderr -v=1: (1.093134733s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-c588n" [5a8e7634-41d8-492c-8b3c-a7774d8228e0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-c588n" [5a8e7634-41d8-492c-8b3c-a7774d8228e0] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003922265s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-142951 addons disable headlamp --alsologtostderr -v=1: (5.575340131s)
--- PASS: TestAddons/parallel/Headlamp (16.67s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.82s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-272bp" [7e498cce-8aa8-49d5-b0c9-ca63e22a20e2] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003278934s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-142951
--- PASS: TestAddons/parallel/CloudSpanner (5.82s)

                                                
                                    
TestAddons/parallel/LocalPath (8.04s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-142951 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-142951 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-142951 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2df40a42-1c0b-4d3f-a93e-a11623400660] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2df40a42-1c0b-4d3f-a93e-a11623400660] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2df40a42-1c0b-4d3f-a93e-a11623400660] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00276164s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-142951 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 ssh "cat /opt/local-path-provisioner/pvc-c78e1662-15f1-40c8-8ca4-6b6d5b18666a_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-142951 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-142951 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.04s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.42s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bc72h" [1afb0b8d-3754-410e-886b-723b6ec99725] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003688277s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-142951
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.42s)

                                                
                                    
TestAddons/parallel/Yakd (10.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-z8cn8" [2210b1e7-9254-4472-8e0d-d995789424ea] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.018875173s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-142951 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-142951 addons disable yakd --alsologtostderr -v=1: (5.579985411s)
--- PASS: TestAddons/parallel/Yakd (10.60s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.05s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-142951
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-142951: (11.823227531s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-142951
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-142951
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-142951
--- PASS: TestAddons/StoppedEnableDisable (12.05s)

                                                
                                    
TestCertOptions (31.51s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-278398 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-278398 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.13549876s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-278398 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-278398 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-278398 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-278398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-278398
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-278398: (7.702407222s)
--- PASS: TestCertOptions (31.51s)
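The ssh step above dumps the API server certificate with openssl so the test can confirm that the extra --apiserver-names and --apiserver-ips values ended up as SANs. A standalone sketch of the same inspection in Go follows; it assumes the certificate has been copied off the node to a local file named apiserver.crt, which is an illustration rather than anything the test does.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt from the node.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	// The SANs should include the names and IPs passed at start time,
	// e.g. www.google.com and 192.168.15.15 in this run.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs :", cert.IPAddresses)
}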

                                                
                                    
TestCertExpiration (238.77s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-328922 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-328922 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.432367862s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-328922 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-328922 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (34.926092195s)
helpers_test.go:175: Cleaning up "cert-expiration-328922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-328922
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-328922: (2.411992811s)
--- PASS: TestCertExpiration (238.77s)

                                                
                                    
TestForceSystemdFlag (26.49s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-602609 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-602609 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.814544193s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-602609 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-602609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-602609
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-602609: (2.322377551s)
--- PASS: TestForceSystemdFlag (26.49s)

                                                
                                    
TestForceSystemdEnv (36.52s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-496583 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-496583 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.048289979s)
helpers_test.go:175: Cleaning up "force-systemd-env-496583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-496583
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-496583: (2.469521822s)
--- PASS: TestForceSystemdEnv (36.52s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.25s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.25s)

                                                
                                    
TestErrorSpam/setup (23.08s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-883277 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-883277 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-883277 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-883277 --driver=docker  --container-runtime=crio: (23.082323963s)
--- PASS: TestErrorSpam/setup (23.08s)

                                                
                                    
TestErrorSpam/start (0.55s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

                                                
                                    
TestErrorSpam/status (0.82s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 status
--- PASS: TestErrorSpam/status (0.82s)

                                                
                                    
TestErrorSpam/pause (1.44s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 pause
--- PASS: TestErrorSpam/pause (1.44s)

                                                
                                    
TestErrorSpam/unpause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 unpause
--- PASS: TestErrorSpam/unpause (1.55s)

                                                
                                    
TestErrorSpam/stop (1.32s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 stop: (1.157449848s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-883277 --log_dir /tmp/nospam-883277 stop
--- PASS: TestErrorSpam/stop (1.32s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19468-24160/.minikube/files/etc/test/nested/copy/30966/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (41.13s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511891 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-511891 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.132711613s)
--- PASS: TestFunctional/serial/StartWithProxy (41.13s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (21.65s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511891 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-511891 --alsologtostderr -v=8: (21.652189819s)
functional_test.go:663: soft start took 21.652868288s for "functional-511891" cluster.
--- PASS: TestFunctional/serial/SoftStart (21.65s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-511891 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.89s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-511891 cache add registry.k8s.io/pause:3.3: (1.050254771s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.89s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-511891 /tmp/TestFunctionalserialCacheCmdcacheadd_local1718237736/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 cache add minikube-local-cache-test:functional-511891
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 cache delete minikube-local-cache-test:functional-511891
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-511891
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511891 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (252.163072ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)
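The cache_reload flow above relies on exit codes: crictl inspecti returns non-zero while the image is absent from the node, and zero again once minikube cache reload restores it. A small sketch of driving that sequence from Go, using the same commands the log shows; the profile name and binary path are taken from this run, and the relative out/ path assumes the commands run from a minikube checkout.

package main

import (
	"fmt"
	"os/exec"
)

// imagePresent reports whether crictl on the node can inspect the image,
// mirroring the `ssh sudo crictl inspecti` check in the log.
func imagePresent(profile, image string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo crictl inspecti "+image)
	return cmd.Run() == nil // a non-nil error includes a non-zero exit status
}

func main() {
	const profile = "functional-511891"
	const image = "registry.k8s.io/pause:latest"

	fmt.Println("present before reload:", imagePresent(profile, image))

	// Push everything in minikube's local cache back onto the node.
	if err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cache", "reload").Run(); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}
	fmt.Println("present after reload:", imagePresent(profile, image))
}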

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 kubectl -- --context functional-511891 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-511891 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (27.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511891 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-511891 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (27.885329091s)
functional_test.go:761: restart took 27.885457881s for "functional-511891" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (27.89s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-511891 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
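ComponentHealth lists the control-plane pods with a tier=control-plane selector and then checks each pod's phase and Ready condition, which is what the "phase: Running" / "status: Ready" lines above summarize. A rough sketch of the same check, decoding kubectl's JSON output; this is not the test's own code, and the context name is simply the one from this run.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-511891",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}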

                                                
                                    
TestFunctional/serial/LogsCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-511891 logs: (1.246467495s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 logs --file /tmp/TestFunctionalserialLogsFileCmd2827538971/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-511891 logs --file /tmp/TestFunctionalserialLogsFileCmd2827538971/001/logs.txt: (1.261921883s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                    
TestFunctional/serial/InvalidService (4.25s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-511891 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-511891
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-511891: exit status 115 (298.167114ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32661 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-511891 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)
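The SVC_UNREACHABLE exit above is what minikube reports when a service exists but has no running pod behind it. One way to confirm that condition directly is to look at the service's ready endpoints; the sketch below does so with kubectl, and the context and service names are just the ones from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Check whether the service has any ready endpoints before asking minikube
	// to open it; an empty result is what produces the SVC_UNREACHABLE error above.
	out, err := exec.Command("kubectl", "--context", "functional-511891",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	ips := strings.Fields(string(out))
	if len(ips) == 0 {
		fmt.Println("no ready endpoints behind invalid-svc; `minikube service` would exit with SVC_UNREACHABLE")
		return
	}
	fmt.Println("ready endpoint IPs:", ips)
}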

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511891 config get cpus: exit status 14 (52.548174ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511891 config get cpus: exit status 14 (60.087814ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-511891 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-511891 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 73097: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.55s)

                                                
                                    
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511891 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-511891 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (144.485785ms)

                                                
                                                
-- stdout --
	* [functional-511891] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:07:55.951145   71971 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:07:55.951368   71971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:55.951377   71971 out.go:358] Setting ErrFile to fd 2...
	I0819 18:07:55.951381   71971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:55.951589   71971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 18:07:55.952077   71971 out.go:352] Setting JSON to false
	I0819 18:07:55.952980   71971 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6626,"bootTime":1724084250,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:07:55.953036   71971 start.go:139] virtualization: kvm guest
	I0819 18:07:55.955466   71971 out.go:177] * [functional-511891] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:07:55.957032   71971 notify.go:220] Checking for updates...
	I0819 18:07:55.957053   71971 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:07:55.958540   71971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:07:55.960286   71971 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 18:07:55.961691   71971 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	I0819 18:07:55.963108   71971 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:07:55.965043   71971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:07:55.966970   71971 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:07:55.967583   71971 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:07:55.993030   71971 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 18:07:55.993235   71971 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:07:56.042214   71971 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 18:07:56.032428986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 18:07:56.042309   71971 docker.go:307] overlay module found
	I0819 18:07:56.044025   71971 out.go:177] * Using the docker driver based on existing profile
	I0819 18:07:56.045393   71971 start.go:297] selected driver: docker
	I0819 18:07:56.045416   71971 start.go:901] validating driver "docker" against &{Name:functional-511891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-511891 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:07:56.045508   71971 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:07:56.047638   71971 out.go:201] 
	W0819 18:07:56.049305   71971 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 18:07:56.050813   71971 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511891 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
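The dry run fails with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB is below the usable minimum of 1800MB quoted in the error. A toy sketch of that kind of floor check follows; the constant is taken from the error message above, not from minikube's source, so treat it as illustrative only.

package main

import "fmt"

const minUsableMemoryMB = 1800 // minimum quoted in the error message above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	for _, req := range []int{250, 4000} {
		if err := validateMemory(req); err != nil {
			fmt.Println("X", err)
		} else {
			fmt.Printf("%dMiB is acceptable\n", req)
		}
	}
}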

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-511891 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-511891 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (137.995586ms)

                                                
                                                
-- stdout --
	* [functional-511891] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:07:55.811549   71886 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:07:55.811656   71886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:55.811665   71886 out.go:358] Setting ErrFile to fd 2...
	I0819 18:07:55.811669   71886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:55.811935   71886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 18:07:55.812418   71886 out.go:352] Setting JSON to false
	I0819 18:07:55.813452   71886 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6626,"bootTime":1724084250,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:07:55.813507   71886 start.go:139] virtualization: kvm guest
	I0819 18:07:55.815691   71886 out.go:177] * [functional-511891] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0819 18:07:55.816748   71886 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:07:55.816783   71886 notify.go:220] Checking for updates...
	I0819 18:07:55.819002   71886 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:07:55.820171   71886 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 18:07:55.821307   71886 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	I0819 18:07:55.822467   71886 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:07:55.823625   71886 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:07:55.825327   71886 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:07:55.826018   71886 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:07:55.848625   71886 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 18:07:55.848774   71886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:07:55.897470   71886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 18:07:55.886690937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 18:07:55.897605   71886 docker.go:307] overlay module found
	I0819 18:07:55.900054   71886 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0819 18:07:55.901282   71886 start.go:297] selected driver: docker
	I0819 18:07:55.901307   71886 start.go:901] validating driver "docker" against &{Name:functional-511891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-511891 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:07:55.901423   71886 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:07:55.903659   71886 out.go:201] 
	W0819 18:07:55.904893   71886 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 18:07:55.906012   71886 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
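
A minimal Go sketch of reproducing this localized failure outside the test harness. It assumes the French output is selected through the LC_ALL environment variable; the exit code 23 and the RSRC_INSUFFICIENT_REQ_MEMORY reason are taken from the stderr above, and the profile name would differ on another run.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Request less memory than minikube allows and expect a localized error.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-511891", "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=crio")
	// Assumption: the locale environment drives the French messages above.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")

	out, err := cmd.CombinedOutput()
	exitCode := 0
	if ee, ok := err.(*exec.ExitError); ok {
		exitCode = ee.ExitCode()
	}
	fmt.Printf("exit code: %d\n", exitCode) // the log above shows 23
	if !strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("expected the insufficient-memory reason code in the output")
	}
}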

                                                
                                    
TestFunctional/parallel/StatusCmd (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)
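
A small Go sketch of driving the same status command and splitting its templated output into fields. The format string (including the "kublet" key) is copied verbatim from the command logged above; the expected output shape, e.g. "host:Running,kublet:Running,...", is an assumption based on that template.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-511891",
		"status", "-f", format).Output()
	if err != nil {
		// minikube status exits non-zero for degraded clusters; report and continue.
		fmt.Println("status returned an error:", err)
	}
	for _, field := range strings.Split(strings.TrimSpace(string(out)), ",") {
		fmt.Println(field)
	}
}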

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (13.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-511891 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-511891 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hbw92" [292fd5e2-0445-4efc-8ea3-2f85c9e64298] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hbw92" [292fd5e2-0445-4efc-8ea3-2f85c9e64298] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.00392061s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31461
functional_test.go:1675: http://192.168.49.2:31461: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-hbw92

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31461
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.66s)
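
A minimal Go sketch of the final connectivity check: issue the same HTTP GET against the NodePort URL and verify the echoserver body. The URL and pod name are copied from the log above for illustration; in practice the URL comes from `minikube service hello-node-connect --url` and changes per run.

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:31461")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// The echoserver reply begins with a "Hostname:" line, as recorded above.
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("echoserver reachable through the NodePort")
	} else {
		fmt.Printf("unexpected body:\n%s\n", body)
	}
}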

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [625af3a7-bce0-4eba-b50a-f0d6601c42e9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003832115s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-511891 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-511891 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-511891 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-511891 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [80dac9a3-109a-4ae2-8241-12530790c4f8] Pending
helpers_test.go:344: "sp-pod" [80dac9a3-109a-4ae2-8241-12530790c4f8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [80dac9a3-109a-4ae2-8241-12530790c4f8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.003127954s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-511891 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-511891 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-511891 delete -f testdata/storage-provisioner/pod.yaml: (1.16231922s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-511891 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fd5139db-2621-45a1-b7ef-f1110af25605] Pending
helpers_test.go:344: "sp-pod" [fd5139db-2621-45a1-b7ef-f1110af25605] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.01284514s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-511891 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.94s)
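
A condensed Go sketch of the persistence check performed above: write a file in the first pod, delete the pod, recreate it from the same manifest, and read the file back from the PVC-backed mount. Pod name, context, and manifest path are the ones in the log; the readiness wait between recreate and the final check is omitted here.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one kubectl step against the test context and returns combined output.
func run(args ...string) (string, error) {
	full := append([]string{"--context", "functional-511891"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// The real test waits for the new pod to become Ready before this step.
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		out, err := run(s...)
		fmt.Printf("kubectl %v\n%s", s, out)
		if err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}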

                                                
                                    
TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh -n functional-511891 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 cp functional-511891:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3882952126/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh -n functional-511891 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh -n functional-511891 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.66s)

                                                
                                    
TestFunctional/parallel/MySQL (19.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-511891 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-54w5f" [e938e980-f2f4-4289-a1d8-0d5b9f6943f5] Pending
helpers_test.go:344: "mysql-6cdb49bbb-54w5f" [e938e980-f2f4-4289-a1d8-0d5b9f6943f5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-54w5f" [e938e980-f2f4-4289-a1d8-0d5b9f6943f5] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.034582817s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-511891 exec mysql-6cdb49bbb-54w5f -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-511891 exec mysql-6cdb49bbb-54w5f -- mysql -ppassword -e "show databases;": exit status 1 (224.235643ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-511891 exec mysql-6cdb49bbb-54w5f -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-511891 exec mysql-6cdb49bbb-54w5f -- mysql -ppassword -e "show databases;": exit status 1 (118.22569ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-511891 exec mysql-6cdb49bbb-54w5f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.64s)
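
The two non-zero exits above (ERROR 1045, then ERROR 2002) fall in the window where mysqld inside the pod is still initializing, which is why the test simply reruns the query until it succeeds. A minimal Go retry loop over the same kubectl command, assuming a fixed three-second backoff; the pod name is copied from this run and changes each time.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const attempts = 10
	for i := 1; i <= attempts; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-511891",
			"exec", "mysql-6cdb49bbb-54w5f", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("query succeeded on attempt %d:\n%s", i, out)
			return
		}
		// Access-denied and socket errors during startup are expected; retry.
		fmt.Printf("attempt %d failed: %v\n", i, err)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("mysql never became ready")
}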

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/30966/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo cat /etc/test/nested/copy/30966/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

                                                
                                    
TestFunctional/parallel/CertSync (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/30966.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo cat /etc/ssl/certs/30966.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/30966.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo cat /usr/share/ca-certificates/30966.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/309662.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo cat /etc/ssl/certs/309662.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/309662.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo cat /usr/share/ca-certificates/309662.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.82s)
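
The /etc/ssl/certs/51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash filenames for the synced certificates. A Go sketch, assuming openssl is on PATH, that derives the hashed name from a local PEM file and then looks for it inside the VM; the /path/to/30966.pem placeholder stands in for wherever the host copy of the synced cert lives.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// openssl x509 -hash prints the subject hash used for names like 51391683.0.
	hashOut, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/path/to/30966.pem").Output() // placeholder path, adjust locally
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hashed := strings.TrimSpace(string(hashOut)) + ".0"

	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-511891",
		"ssh", "sudo cat /etc/ssl/certs/"+hashed).CombinedOutput()
	if err != nil {
		fmt.Printf("certificate %s not found in the VM: %v\n", hashed, err)
		return
	}
	fmt.Printf("found %s (%d bytes)\n", hashed, len(out))
}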

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-511891 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511891 ssh "sudo systemctl is-active docker": exit status 1 (275.271378ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511891 ssh "sudo systemctl is-active containerd": exit status 1 (281.319792ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
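
`systemctl is-active` exits 0 only when a unit is active, so the "inactive" stdout plus exit status 3 above is exactly what a crio-based cluster should produce for docker and containerd. A small Go sketch of the same check over `minikube ssh`, reading only stdout so the ssh exit-status message does not interfere.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-511891",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if state == "inactive" {
			// The non-nil err mirrors the logged exit status 3 and is expected here.
			fmt.Printf("%s: inactive as expected (err: %v)\n", unit, err)
		} else {
			fmt.Printf("%s: unexpected state %q\n", unit, state)
		}
	}
}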

                                                
                                    
TestFunctional/parallel/License (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-511891 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-511891
localhost/kicbase/echo-server:functional-511891
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-511891 image ls --format short --alsologtostderr:
I0819 18:07:59.411201   74017 out.go:345] Setting OutFile to fd 1 ...
I0819 18:07:59.411445   74017 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:59.411453   74017 out.go:358] Setting ErrFile to fd 2...
I0819 18:07:59.411465   74017 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:59.411700   74017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
I0819 18:07:59.412577   74017 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:59.412724   74017 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:59.413346   74017 cli_runner.go:164] Run: docker container inspect functional-511891 --format={{.State.Status}}
I0819 18:07:59.439568   74017 ssh_runner.go:195] Run: systemctl --version
I0819 18:07:59.439696   74017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-511891
I0819 18:07:59.461460   74017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/functional-511891/id_rsa Username:docker}
I0819 18:07:59.561005   74017 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-511891 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/kicbase/echo-server           | functional-511891  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | alpine             | 0f0eda053dc5c | 44.7MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| localhost/minikube-local-cache-test     | functional-511891  | f3c8f37594b92 | 3.33kB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-511891 image ls --format table --alsologtostderr:
I0819 18:07:59.945273   74272 out.go:345] Setting OutFile to fd 1 ...
I0819 18:07:59.945656   74272 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:59.945672   74272 out.go:358] Setting ErrFile to fd 2...
I0819 18:07:59.945679   74272 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:59.945997   74272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
I0819 18:07:59.946762   74272 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:59.946912   74272 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:59.947418   74272 cli_runner.go:164] Run: docker container inspect functional-511891 --format={{.State.Status}}
I0819 18:07:59.972018   74272 ssh_runner.go:195] Run: systemctl --version
I0819 18:07:59.972065   74272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-511891
I0819 18:07:59.995368   74272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/functional-511891/id_rsa Username:docker}
I0819 18:08:00.085347   74272 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-511891 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e
582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5
.7"],"size":"519571821"},{"id":"f3c8f37594b92f3f3241e4ab8c57a686b814bfa830c668580806b223a94aa4dc","repoDigests":["localhost/minikube-local-cache-test@sha256:26ffa8448e5f8a3a4fa61f95229c96768d011ba35dc4ddd02d410bb98c920e92"],"repoTags":["localhost/minikube-local-cache-test:functional-511891"],"size":"3330"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230
fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":["docker.io/library/nginx@sha256:0c57fe90551cf
d8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0","docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44668625"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-ap
iserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256
:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-511891"],"size":"4943877"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-511891 image ls --format json --alsologtostderr:
I0819 18:07:59.655501   74120 out.go:345] Setting OutFile to fd 1 ...
I0819 18:07:59.655619   74120 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:59.655628   74120 out.go:358] Setting ErrFile to fd 2...
I0819 18:07:59.655632   74120 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:59.655822   74120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
I0819 18:07:59.656350   74120 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:59.656435   74120 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:59.656800   74120 cli_runner.go:164] Run: docker container inspect functional-511891 --format={{.State.Status}}
I0819 18:07:59.683872   74120 ssh_runner.go:195] Run: systemctl --version
I0819 18:07:59.683914   74120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-511891
I0819 18:07:59.706116   74120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/functional-511891/id_rsa Username:docker}
I0819 18:07:59.809803   74120 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
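
The JSON printed above is a flat array of image records. A Go sketch, assuming exactly the fields visible in that output (id, repoTags, repoDigests, and size serialized as a string), that decodes the listing and prints tag/size pairs.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-511891",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected JSON shape:", err)
		return
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%-60s %s bytes\n", tag, img.Size)
		}
	}
}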

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-511891 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-511891
size: "4943877"
- id: f3c8f37594b92f3f3241e4ab8c57a686b814bfa830c668580806b223a94aa4dc
repoDigests:
- localhost/minikube-local-cache-test@sha256:26ffa8448e5f8a3a4fa61f95229c96768d011ba35dc4ddd02d410bb98c920e92
repoTags:
- localhost/minikube-local-cache-test:functional-511891
size: "3330"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests:
- docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "44668625"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-511891 image ls --format yaml --alsologtostderr:
I0819 18:07:59.443973   74035 out.go:345] Setting OutFile to fd 1 ...
I0819 18:07:59.444081   74035 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:59.444091   74035 out.go:358] Setting ErrFile to fd 2...
I0819 18:07:59.444097   74035 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:59.444325   74035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
I0819 18:07:59.445026   74035 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:59.445176   74035 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:59.445667   74035 cli_runner.go:164] Run: docker container inspect functional-511891 --format={{.State.Status}}
I0819 18:07:59.468734   74035 ssh_runner.go:195] Run: systemctl --version
I0819 18:07:59.468777   74035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-511891
I0819 18:07:59.487292   74035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/functional-511891/id_rsa Username:docker}
I0819 18:07:59.569443   74035 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511891 ssh pgrep buildkitd: exit status 1 (270.72047ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image build -t localhost/my-image:functional-511891 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-511891 image build -t localhost/my-image:functional-511891 testdata/build --alsologtostderr: (3.507510029s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-511891 image build -t localhost/my-image:functional-511891 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 577975b14e2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-511891
--> c46d09f106b
Successfully tagged localhost/my-image:functional-511891
c46d09f106bac3e9e3ab783ca520ffc1c98b17e5ff9aeea63ba3043b1070163e
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-511891 image build -t localhost/my-image:functional-511891 testdata/build --alsologtostderr:
I0819 18:07:59.931832   74261 out.go:345] Setting OutFile to fd 1 ...
I0819 18:07:59.932027   74261 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:59.932038   74261 out.go:358] Setting ErrFile to fd 2...
I0819 18:07:59.932058   74261 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:07:59.932365   74261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
I0819 18:07:59.933202   74261 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:59.933917   74261 config.go:182] Loaded profile config "functional-511891": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0819 18:07:59.934540   74261 cli_runner.go:164] Run: docker container inspect functional-511891 --format={{.State.Status}}
I0819 18:07:59.958498   74261 ssh_runner.go:195] Run: systemctl --version
I0819 18:07:59.958540   74261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-511891
I0819 18:07:59.980503   74261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/functional-511891/id_rsa Username:docker}
I0819 18:08:00.074179   74261 build_images.go:161] Building image from path: /tmp/build.3268305047.tar
I0819 18:08:00.074237   74261 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 18:08:00.085147   74261 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3268305047.tar
I0819 18:08:00.092662   74261 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3268305047.tar: stat -c "%s %y" /var/lib/minikube/build/build.3268305047.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3268305047.tar': No such file or directory
I0819 18:08:00.092697   74261 ssh_runner.go:362] scp /tmp/build.3268305047.tar --> /var/lib/minikube/build/build.3268305047.tar (3072 bytes)
I0819 18:08:00.164553   74261 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3268305047
I0819 18:08:00.174115   74261 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3268305047 -xf /var/lib/minikube/build/build.3268305047.tar
I0819 18:08:00.183539   74261 crio.go:315] Building image: /var/lib/minikube/build/build.3268305047
I0819 18:08:00.183610   74261 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-511891 /var/lib/minikube/build/build.3268305047 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0819 18:08:03.366131   74261 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-511891 /var/lib/minikube/build/build.3268305047 --cgroup-manager=cgroupfs: (3.182494286s)
I0819 18:08:03.366189   74261 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3268305047
I0819 18:08:03.374050   74261 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3268305047.tar
I0819 18:08:03.381764   74261 build_images.go:217] Built localhost/my-image:functional-511891 from /tmp/build.3268305047.tar
I0819 18:08:03.381794   74261 build_images.go:133] succeeded building to: functional-511891
I0819 18:08:03.381799   74261 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image ls
2024/08/19 18:08:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.97s)
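
For reference, the build exercised above can be repeated by hand against the same profile; with the crio runtime minikube stages the build context under /var/lib/minikube/build and hands it to podman inside the node, as the log shows. A rough manual equivalent (profile and tag taken from this run):
	out/minikube-linux-amd64 -p functional-511891 image build \
	  -t localhost/my-image:functional-511891 testdata/build --alsologtostderr
	# confirm the tag landed in the node's image store
	out/minikube-linux-amd64 -p functional-511891 image ls | grep my-image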

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-511891
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.55s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
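
The three update-context cases above differ only in how many clusters exist when the command runs; the command itself rewrites the kubeconfig entry for the profile. One illustrative way to inspect the result afterwards (the kubectl call is an assumption of this note, not part of the test):
	out/minikube-linux-amd64 -p functional-511891 update-context --alsologtostderr -v=2
	kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-511891")].cluster.server}'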

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image load --daemon kicbase/echo-server:functional-511891 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-511891 image load --daemon kicbase/echo-server:functional-511891 --alsologtostderr: (1.093045821s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image load --daemon kicbase/echo-server:functional-511891 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-511891 image load --daemon kicbase/echo-server:functional-511891 --alsologtostderr: (1.008728396s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.43s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-511891 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-511891 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-511891 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 67491: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-511891 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-511891 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-511891 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2acca853-b659-4f09-8e98-4d141896df06] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2acca853-b659-4f09-8e98-4d141896df06] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.003734734s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.28s)
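
The setup step is an apply followed by the suite's own pod polling; outside the harness the same 4m0s wait for run=nginx-svc can be approximated with kubectl wait (manual sketch, same context and manifest as above):
	kubectl --context functional-511891 apply -f testdata/testsvc.yaml
	kubectl --context functional-511891 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m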

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-511891
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image load --daemon kicbase/echo-server:functional-511891 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image save kicbase/echo-server:functional-511891 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-511891 image save kicbase/echo-server:functional-511891 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.373315494s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image rm kicbase/echo-server:functional-511891 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-511891 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.464060447s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.67s)
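
Together, ImageSaveToFile, ImageRemove and ImageLoadFromFile cover a full export/remove/import round trip. Condensed manual version (the /tmp path is illustrative; the test itself writes into the Jenkins workspace path shown above):
	out/minikube-linux-amd64 -p functional-511891 image save kicbase/echo-server:functional-511891 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-511891 image rm kicbase/echo-server:functional-511891
	out/minikube-linux-amd64 -p functional-511891 image load /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-511891 image ls | grep echo-server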

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-511891
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 image save --daemon kicbase/echo-server:functional-511891 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-511891 image save --daemon kicbase/echo-server:functional-511891 --alsologtostderr: (1.816968905s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-511891
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.85s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-511891 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.151.95 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-511891 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
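
The tunnel sub-tests reduce to: start a tunnel, wait for the LoadBalancer service to receive an ingress IP, hit it, then stop the tunnel. Roughly, by hand (the IP is the one reported in this run and will differ elsewhere):
	out/minikube-linux-amd64 -p functional-511891 tunnel --alsologtostderr &
	kubectl --context functional-511891 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.111.151.95/
	kill %1    # the manual counterpart of DeleteTunnel stopping the daemon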

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-511891 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-511891 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-9fnsw" [18803a09-6a6d-4abc-aa08-6607ade18fc1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-9fnsw" [18803a09-6a6d-4abc-aa08-6607ade18fc1] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003612706s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "265.247414ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.324028ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "270.256867ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "42.474092ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdany-port1795915248/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724090870015586221" to /tmp/TestFunctionalparallelMountCmdany-port1795915248/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724090870015586221" to /tmp/TestFunctionalparallelMountCmdany-port1795915248/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724090870015586221" to /tmp/TestFunctionalparallelMountCmdany-port1795915248/001/test-1724090870015586221
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.017435ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 18:07 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 18:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 18:07 test-1724090870015586221
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh cat /mount-9p/test-1724090870015586221
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-511891 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [46d22dda-528e-4aa1-943c-5d8c37269c46] Pending
helpers_test.go:344: "busybox-mount" [46d22dda-528e-4aa1-943c-5d8c37269c46] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [46d22dda-528e-4aa1-943c-5d8c37269c46] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [46d22dda-528e-4aa1-943c-5d8c37269c46] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.068650223s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-511891 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdany-port1795915248/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.38s)
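
MountCmd/any-port drives a 9p mount of a host temp directory into the node and checks it both over ssh and from a short-lived busybox pod. A trimmed-down manual equivalent (the host path is illustrative):
	out/minikube-linux-amd64 mount -p functional-511891 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-511891 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-511891 ssh "sudo umount -f /mount-9p"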

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdspecific-port3543718639/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.479666ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdspecific-port3543718639/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511891 ssh "sudo umount -f /mount-9p": exit status 1 (257.465772ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-511891 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdspecific-port3543718639/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.89s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 service list -o json
functional_test.go:1494: Took "892.243873ms" to run "out/minikube-linux-amd64 -p functional-511891 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.89s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496454814/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496454814/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496454814/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T" /mount1: exit status 1 (337.42892ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-511891 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496454814/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496454814/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-511891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2496454814/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31130
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31130
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.55s)
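
The ServiceCmd cases resolve the NodePort service created in DeployApp into a reachable URL in several output formats. The essential flow, using the endpoint reported in this run:
	out/minikube-linux-amd64 -p functional-511891 service list -o json
	out/minikube-linux-amd64 -p functional-511891 service hello-node --url
	curl http://192.168.49.2:31130/    # echoserver replies with the request details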

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-511891 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.72s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-511891
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-511891
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-511891
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (97.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-896148 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 18:09:42.067087   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:42.074164   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:42.085541   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:42.107060   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:42.148405   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:42.229826   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:42.391936   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:42.713540   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:43.355698   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:44.637484   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-896148 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m36.684058457s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr
E0819 18:09:47.198765   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/StartCluster (97.32s)
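
The HA cluster for this block is brought up with the flags shown in the Run line; the repeated cert_rotation errors interleaved above appear to come from the earlier, already-deleted addons-142951 profile rather than from this cluster. The start/status pair, for reference:
	out/minikube-linux-amd64 start -p ha-896148 --wait=true --memory=2200 --ha \
	  -v=7 --alsologtostderr --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr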

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-896148 -- rollout status deployment/busybox: (1.998248559s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-bvhd5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-dlprk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-hb5l6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-bvhd5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-dlprk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-hb5l6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-bvhd5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-dlprk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-hb5l6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.74s)
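
DeployApp runs the same three nslookup probes (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) in each of the three busybox replicas. One iteration, using a pod name from this run:
	out/minikube-linux-amd64 kubectl -p ha-896148 -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-bvhd5 -- nslookup kubernetes.default.svc.cluster.local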

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-bvhd5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-bvhd5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-dlprk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-dlprk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-hb5l6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-896148 -- exec busybox-7dff88458-hb5l6 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.96s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (31.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-896148 -v=7 --alsologtostderr
E0819 18:09:52.320991   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:10:02.563335   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:10:23.044979   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-896148 -v=7 --alsologtostderr: (30.980966242s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.76s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-896148 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.60s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp testdata/cp-test.txt ha-896148:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1081370510/001/cp-test_ha-896148.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148:/home/docker/cp-test.txt ha-896148-m02:/home/docker/cp-test_ha-896148_ha-896148-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m02 "sudo cat /home/docker/cp-test_ha-896148_ha-896148-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148:/home/docker/cp-test.txt ha-896148-m03:/home/docker/cp-test_ha-896148_ha-896148-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m03 "sudo cat /home/docker/cp-test_ha-896148_ha-896148-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148:/home/docker/cp-test.txt ha-896148-m04:/home/docker/cp-test_ha-896148_ha-896148-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m04 "sudo cat /home/docker/cp-test_ha-896148_ha-896148-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp testdata/cp-test.txt ha-896148-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1081370510/001/cp-test_ha-896148-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m02:/home/docker/cp-test.txt ha-896148:/home/docker/cp-test_ha-896148-m02_ha-896148.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148 "sudo cat /home/docker/cp-test_ha-896148-m02_ha-896148.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m02:/home/docker/cp-test.txt ha-896148-m03:/home/docker/cp-test_ha-896148-m02_ha-896148-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m03 "sudo cat /home/docker/cp-test_ha-896148-m02_ha-896148-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m02:/home/docker/cp-test.txt ha-896148-m04:/home/docker/cp-test_ha-896148-m02_ha-896148-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m04 "sudo cat /home/docker/cp-test_ha-896148-m02_ha-896148-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp testdata/cp-test.txt ha-896148-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1081370510/001/cp-test_ha-896148-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m03:/home/docker/cp-test.txt ha-896148:/home/docker/cp-test_ha-896148-m03_ha-896148.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148 "sudo cat /home/docker/cp-test_ha-896148-m03_ha-896148.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m03:/home/docker/cp-test.txt ha-896148-m02:/home/docker/cp-test_ha-896148-m03_ha-896148-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m02 "sudo cat /home/docker/cp-test_ha-896148-m03_ha-896148-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m03:/home/docker/cp-test.txt ha-896148-m04:/home/docker/cp-test_ha-896148-m03_ha-896148-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m04 "sudo cat /home/docker/cp-test_ha-896148-m03_ha-896148-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp testdata/cp-test.txt ha-896148-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1081370510/001/cp-test_ha-896148-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m04:/home/docker/cp-test.txt ha-896148:/home/docker/cp-test_ha-896148-m04_ha-896148.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148 "sudo cat /home/docker/cp-test_ha-896148-m04_ha-896148.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m04:/home/docker/cp-test.txt ha-896148-m02:/home/docker/cp-test_ha-896148-m04_ha-896148-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m02 "sudo cat /home/docker/cp-test_ha-896148-m04_ha-896148-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 cp ha-896148-m04:/home/docker/cp-test.txt ha-896148-m03:/home/docker/cp-test_ha-896148-m04_ha-896148-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m03 "sudo cat /home/docker/cp-test_ha-896148-m04_ha-896148-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.01s)
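
CopyFile repeats one pattern for every source/destination pair across the four nodes: cp the test file, then cat it back over ssh to compare. A single iteration, for reference:
	out/minikube-linux-amd64 -p ha-896148 cp testdata/cp-test.txt ha-896148-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-896148 ssh -n ha-896148-m02 "sudo cat /home/docker/cp-test.txt"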

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-896148 node stop m02 -v=7 --alsologtostderr: (11.789109345s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr: exit status 7 (609.275212ms)

                                                
                                                
-- stdout --
	ha-896148
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-896148-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-896148-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-896148-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:10:51.560706   95397 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:10:51.560980   95397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:10:51.560991   95397 out.go:358] Setting ErrFile to fd 2...
	I0819 18:10:51.560997   95397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:10:51.561255   95397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 18:10:51.561515   95397 out.go:352] Setting JSON to false
	I0819 18:10:51.561553   95397 mustload.go:65] Loading cluster: ha-896148
	I0819 18:10:51.561595   95397 notify.go:220] Checking for updates...
	I0819 18:10:51.562066   95397 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:10:51.562083   95397 status.go:255] checking status of ha-896148 ...
	I0819 18:10:51.562494   95397 cli_runner.go:164] Run: docker container inspect ha-896148 --format={{.State.Status}}
	I0819 18:10:51.580609   95397 status.go:330] ha-896148 host status = "Running" (err=<nil>)
	I0819 18:10:51.580630   95397 host.go:66] Checking if "ha-896148" exists ...
	I0819 18:10:51.580855   95397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148
	I0819 18:10:51.597224   95397 host.go:66] Checking if "ha-896148" exists ...
	I0819 18:10:51.597499   95397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:10:51.597554   95397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148
	I0819 18:10:51.614907   95397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148/id_rsa Username:docker}
	I0819 18:10:51.701966   95397 ssh_runner.go:195] Run: systemctl --version
	I0819 18:10:51.705789   95397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:10:51.715908   95397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:10:51.762997   95397 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-08-19 18:10:51.753481454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 18:10:51.763611   95397 kubeconfig.go:125] found "ha-896148" server: "https://192.168.49.254:8443"
	I0819 18:10:51.763641   95397 api_server.go:166] Checking apiserver status ...
	I0819 18:10:51.763677   95397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:10:51.773897   95397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1472/cgroup
	I0819 18:10:51.782078   95397 api_server.go:182] apiserver freezer: "9:freezer:/docker/9492a2c00d6808671da2ea3fbbe3dc36f0775f97f86abc0b4a9e1601c80a4a91/crio/crio-3e35ce2e0d4672cb8cdf467044076e5aaca6df216c3d1d4cba5b60a394ad9c6b"
	I0819 18:10:51.782130   95397 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9492a2c00d6808671da2ea3fbbe3dc36f0775f97f86abc0b4a9e1601c80a4a91/crio/crio-3e35ce2e0d4672cb8cdf467044076e5aaca6df216c3d1d4cba5b60a394ad9c6b/freezer.state
	I0819 18:10:51.789675   95397 api_server.go:204] freezer state: "THAWED"
	I0819 18:10:51.789702   95397 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 18:10:51.793041   95397 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 18:10:51.793060   95397 status.go:422] ha-896148 apiserver status = Running (err=<nil>)
	I0819 18:10:51.793068   95397 status.go:257] ha-896148 status: &{Name:ha-896148 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:10:51.793084   95397 status.go:255] checking status of ha-896148-m02 ...
	I0819 18:10:51.793419   95397 cli_runner.go:164] Run: docker container inspect ha-896148-m02 --format={{.State.Status}}
	I0819 18:10:51.809638   95397 status.go:330] ha-896148-m02 host status = "Stopped" (err=<nil>)
	I0819 18:10:51.809652   95397 status.go:343] host is not running, skipping remaining checks
	I0819 18:10:51.809658   95397 status.go:257] ha-896148-m02 status: &{Name:ha-896148-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:10:51.809678   95397 status.go:255] checking status of ha-896148-m03 ...
	I0819 18:10:51.809984   95397 cli_runner.go:164] Run: docker container inspect ha-896148-m03 --format={{.State.Status}}
	I0819 18:10:51.826126   95397 status.go:330] ha-896148-m03 host status = "Running" (err=<nil>)
	I0819 18:10:51.826147   95397 host.go:66] Checking if "ha-896148-m03" exists ...
	I0819 18:10:51.826387   95397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148-m03
	I0819 18:10:51.842025   95397 host.go:66] Checking if "ha-896148-m03" exists ...
	I0819 18:10:51.842250   95397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:10:51.842283   95397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m03
	I0819 18:10:51.857704   95397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m03/id_rsa Username:docker}
	I0819 18:10:51.941777   95397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:10:51.951937   95397 kubeconfig.go:125] found "ha-896148" server: "https://192.168.49.254:8443"
	I0819 18:10:51.951963   95397 api_server.go:166] Checking apiserver status ...
	I0819 18:10:51.952009   95397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:10:51.961290   95397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	I0819 18:10:51.969609   95397 api_server.go:182] apiserver freezer: "9:freezer:/docker/7522e10788e71a36c7d209a0ce025c7a3c6cb3e5be221ac1802ff787824336ff/crio/crio-35eb6adaee5145330841f303eaaaaf5174f3fe42c983551df453caca3cb2e2d6"
	I0819 18:10:51.969670   95397 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7522e10788e71a36c7d209a0ce025c7a3c6cb3e5be221ac1802ff787824336ff/crio/crio-35eb6adaee5145330841f303eaaaaf5174f3fe42c983551df453caca3cb2e2d6/freezer.state
	I0819 18:10:51.977095   95397 api_server.go:204] freezer state: "THAWED"
	I0819 18:10:51.977140   95397 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 18:10:51.980932   95397 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 18:10:51.980951   95397 status.go:422] ha-896148-m03 apiserver status = Running (err=<nil>)
	I0819 18:10:51.980959   95397 status.go:257] ha-896148-m03 status: &{Name:ha-896148-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:10:51.980982   95397 status.go:255] checking status of ha-896148-m04 ...
	I0819 18:10:51.981258   95397 cli_runner.go:164] Run: docker container inspect ha-896148-m04 --format={{.State.Status}}
	I0819 18:10:51.997574   95397 status.go:330] ha-896148-m04 host status = "Running" (err=<nil>)
	I0819 18:10:51.997602   95397 host.go:66] Checking if "ha-896148-m04" exists ...
	I0819 18:10:51.997906   95397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-896148-m04
	I0819 18:10:52.013377   95397 host.go:66] Checking if "ha-896148-m04" exists ...
	I0819 18:10:52.013626   95397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:10:52.013655   95397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-896148-m04
	I0819 18:10:52.029406   95397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/ha-896148-m04/id_rsa Username:docker}
	I0819 18:10:52.117718   95397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:10:52.127528   95397 status.go:257] ha-896148-m04 status: &{Name:ha-896148-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.40s)
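The stderr block above traces how the status command sizes up each control-plane node: a docker container inspect for the host state, an SSH probe for kubelet, pgrep plus the freezer cgroup to locate the apiserver, and finally a GET against /healthz. A minimal Go sketch of just the first step, run outside the test suite and not taken from minikube's own code (the helper name and error handling are illustrative), assuming Docker is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out the same way the log above does:
//   docker container inspect <name> --format={{.State.Status}}
// and returns the raw Docker state string (e.g. "running" or "exited").
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format={{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// "ha-896148-m02" is the node container named in the log; adjust for your profile.
	state, err := containerState("ha-896148-m02")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("state:", state)
}

Running it against ha-896148-m02 at this point in the run would return the container's raw state, which the status output above reports as "Stopped".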

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.44s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (22.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 node start m02 -v=7 --alsologtostderr
E0819 18:11:04.007276   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-896148 node start m02 -v=7 --alsologtostderr: (21.621370923s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr: (1.03116008s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (22.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.335236248s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.34s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (175.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-896148 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-896148 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-896148 -v=7 --alsologtostderr: (36.519598718s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-896148 --wait=true -v=7 --alsologtostderr
E0819 18:12:25.929666   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:29.626385   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:29.632757   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:29.644103   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:29.665453   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:29.706805   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:29.788229   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:29.949736   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:30.271629   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:30.913637   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:32.195452   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:34.756734   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:39.878043   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:12:50.119543   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:13:10.601789   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:13:51.564153   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-896148 --wait=true -v=7 --alsologtostderr: (2m18.506326568s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-896148
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (175.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-896148 node delete m03 -v=7 --alsologtostderr: (10.473326397s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (35.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 stop -v=7 --alsologtostderr
E0819 18:14:42.067725   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-896148 stop -v=7 --alsologtostderr: (35.2402861s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr: exit status 7 (98.1929ms)

                                                
                                                
-- stdout --
	ha-896148
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-896148-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-896148-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:14:59.628906  112513 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:14:59.629165  112513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:14:59.629175  112513 out.go:358] Setting ErrFile to fd 2...
	I0819 18:14:59.629184  112513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:14:59.629364  112513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 18:14:59.629510  112513 out.go:352] Setting JSON to false
	I0819 18:14:59.629538  112513 mustload.go:65] Loading cluster: ha-896148
	I0819 18:14:59.629651  112513 notify.go:220] Checking for updates...
	I0819 18:14:59.629946  112513 config.go:182] Loaded profile config "ha-896148": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:14:59.629961  112513 status.go:255] checking status of ha-896148 ...
	I0819 18:14:59.630368  112513 cli_runner.go:164] Run: docker container inspect ha-896148 --format={{.State.Status}}
	I0819 18:14:59.652944  112513 status.go:330] ha-896148 host status = "Stopped" (err=<nil>)
	I0819 18:14:59.652962  112513 status.go:343] host is not running, skipping remaining checks
	I0819 18:14:59.652968  112513 status.go:257] ha-896148 status: &{Name:ha-896148 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:14:59.653000  112513 status.go:255] checking status of ha-896148-m02 ...
	I0819 18:14:59.653251  112513 cli_runner.go:164] Run: docker container inspect ha-896148-m02 --format={{.State.Status}}
	I0819 18:14:59.669179  112513 status.go:330] ha-896148-m02 host status = "Stopped" (err=<nil>)
	I0819 18:14:59.669196  112513 status.go:343] host is not running, skipping remaining checks
	I0819 18:14:59.669204  112513 status.go:257] ha-896148-m02 status: &{Name:ha-896148-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:14:59.669223  112513 status.go:255] checking status of ha-896148-m04 ...
	I0819 18:14:59.669490  112513 cli_runner.go:164] Run: docker container inspect ha-896148-m04 --format={{.State.Status}}
	I0819 18:14:59.686624  112513 status.go:330] ha-896148-m04 host status = "Stopped" (err=<nil>)
	I0819 18:14:59.686642  112513 status.go:343] host is not running, skipping remaining checks
	I0819 18:14:59.686649  112513 status.go:257] ha-896148-m04 status: &{Name:ha-896148-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.34s)
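In the stdout above every node reports Stopped and the command exits with status 7 rather than 0, so scripts that gate on a fully stopped cluster can key off the exit code instead of parsing the text. A short sketch, assuming the same test binary path used throughout this report and a working directory at the repository root:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log above; the binary path is the test build used in this report.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-896148", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above this was 7 while every node was stopped.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}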

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.43s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (41.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-896148 --control-plane -v=7 --alsologtostderr
E0819 18:17:29.626645   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-896148 --control-plane -v=7 --alsologtostderr: (40.272127947s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-896148 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.04s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.60s)

                                                
                                    
x
+
TestJSONOutput/start/Command (39.53s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-139450 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0819 18:17:57.327409   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-139450 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (39.532517788s)
--- PASS: TestJSONOutput/start/Command (39.53s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-139450 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.55s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-139450 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.7s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-139450 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-139450 --output=json --user=testUser: (5.70080832s)
--- PASS: TestJSONOutput/stop/Command (5.70s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-910597 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-910597 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.498773ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"21c87d2f-9357-4f72-8e82-93616fb39743","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-910597] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e766abe-6270-4b41-806c-ddffe90e1820","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19468"}}
	{"specversion":"1.0","id":"6bb25d83-9d41-4ace-a7f7-0a4ef01ba226","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2e19e5c1-0fe4-4853-894e-35041a7d4b58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig"}}
	{"specversion":"1.0","id":"6ad84d88-e465-41c9-aca2-8d8e894b0c51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube"}}
	{"specversion":"1.0","id":"147c69fe-a320-4e3b-825d-e56d3b58bbd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4f177fe6-d6a2-4098-8225-d8ad8feef688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0e6a112c-300e-4bb4-8ad6-df3c3b5b19ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-910597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-910597
--- PASS: TestErrorJSONOutput (0.19s)
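Each stdout line above is a self-contained CloudEvents-style JSON object, which is what --output=json emits even on the failure path (note the final io.k8s.sigs.minikube.error event carrying exitcode 56). A decoding sketch in Go; the field set is inferred from the lines printed above, so treat it as illustrative rather than minikube's canonical schema:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the fields visible in the output above; minikube may emit more.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// One line copied verbatim from the stdout above.
	line := `{"specversion":"1.0","id":"3e766abe-6270-4b41-806c-ddffe90e1820","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19468"}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["message"])
}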

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (26.36s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-770392 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-770392 --network=: (24.281109628s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-770392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-770392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-770392: (2.060390212s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.36s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (23.59s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-458477 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-458477 --network=bridge: (21.706430923s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-458477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-458477
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-458477: (1.867037548s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.59s)

                                                
                                    
x
+
TestKicExistingNetwork (25.33s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-862639 --network=existing-network
E0819 18:19:42.068050   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-862639 --network=existing-network: (23.312693205s)
helpers_test.go:175: Cleaning up "existing-network-862639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-862639
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-862639: (1.880451364s)
--- PASS: TestKicExistingNetwork (25.33s)

                                                
                                    
x
+
TestKicCustomSubnet (22.82s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-973039 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-973039 --subnet=192.168.60.0/24: (20.800423927s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-973039 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-973039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-973039
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-973039: (1.999528971s)
--- PASS: TestKicCustomSubnet (22.82s)
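The verification step above reads the subnet back out of the Docker network that --subnet created. The same check can be run by hand; this sketch is illustrative, the network name is the profile name from this run, and the expected value is the 192.168.60.0/24 passed on the command line:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same check the test runs: read the subnet back from the created network.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-973039",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	fmt.Println("subnet:", got, "(expected 192.168.60.0/24 in this run)")
}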

                                                
                                    
x
+
TestKicStaticIP (25.34s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-862481 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-862481 --static-ip=192.168.200.200: (23.260112585s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-862481 ip
helpers_test.go:175: Cleaning up "static-ip-862481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-862481
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-862481: (1.962448919s)
--- PASS: TestKicStaticIP (25.34s)

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (50.61s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-326952 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-326952 --driver=docker  --container-runtime=crio: (22.78188201s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-330187 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-330187 --driver=docker  --container-runtime=crio: (22.894095215s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-326952
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-330187
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-330187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-330187
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-330187: (1.798461814s)
helpers_test.go:175: Cleaning up "first-326952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-326952
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-326952: (2.140264231s)
--- PASS: TestMinikubeProfile (50.61s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.13s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-185567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-185567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.129457174s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.13s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-185567 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.24s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-197727 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-197727 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.235858431s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.24s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.22s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-197727 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.22s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-185567 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-185567 --alsologtostderr -v=5: (1.57889789s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.22s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-197727 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.22s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.16s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-197727
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-197727: (1.163055775s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.29s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-197727
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-197727: (6.288728335s)
--- PASS: TestMountStart/serial/RestartStopped (7.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-197727 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (64.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-620888 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 18:22:29.626167   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-620888 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m4.566098512s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.99s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (2.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-620888 -- rollout status deployment/busybox: (1.577728704s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- exec busybox-7dff88458-4bht7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- exec busybox-7dff88458-stgx4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- exec busybox-7dff88458-4bht7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- exec busybox-7dff88458-stgx4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- exec busybox-7dff88458-4bht7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- exec busybox-7dff88458-stgx4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (2.88s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- exec busybox-7dff88458-4bht7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- exec busybox-7dff88458-4bht7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- exec busybox-7dff88458-stgx4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-620888 -- exec busybox-7dff88458-stgx4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.66s)
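The two probes above first resolve host.minikube.internal inside each busybox pod, trim the answer down to the address with awk and cut, and then ping that gateway (192.168.67.1 in this run). A sketch of the lookup half using plain kubectl with the profile's context rather than the minikube kubectl wrapper the test invokes; the pod and context names are taken from this run and assume the cluster is still up:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the exec pipeline in the log: resolve host.minikube.internal inside the pod
	// and keep only the address field of the nslookup answer.
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "multinode-620888",
		"exec", "busybox-7dff88458-4bht7", "--", "sh", "-c", script).Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	fmt.Println("host IP seen from the pod:", strings.TrimSpace(string(out)))
}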

                                                
                                    
x
+
TestMultiNode/serial/AddNode (29.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-620888 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-620888 -v 3 --alsologtostderr: (28.67492079s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.24s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-620888 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (8.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp testdata/cp-test.txt multinode-620888:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp multinode-620888:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4095321799/001/cp-test_multinode-620888.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp multinode-620888:/home/docker/cp-test.txt multinode-620888-m02:/home/docker/cp-test_multinode-620888_multinode-620888-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m02 "sudo cat /home/docker/cp-test_multinode-620888_multinode-620888-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp multinode-620888:/home/docker/cp-test.txt multinode-620888-m03:/home/docker/cp-test_multinode-620888_multinode-620888-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m03 "sudo cat /home/docker/cp-test_multinode-620888_multinode-620888-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp testdata/cp-test.txt multinode-620888-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp multinode-620888-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4095321799/001/cp-test_multinode-620888-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp multinode-620888-m02:/home/docker/cp-test.txt multinode-620888:/home/docker/cp-test_multinode-620888-m02_multinode-620888.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888 "sudo cat /home/docker/cp-test_multinode-620888-m02_multinode-620888.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp multinode-620888-m02:/home/docker/cp-test.txt multinode-620888-m03:/home/docker/cp-test_multinode-620888-m02_multinode-620888-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m03 "sudo cat /home/docker/cp-test_multinode-620888-m02_multinode-620888-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp testdata/cp-test.txt multinode-620888-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp multinode-620888-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4095321799/001/cp-test_multinode-620888-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp multinode-620888-m03:/home/docker/cp-test.txt multinode-620888:/home/docker/cp-test_multinode-620888-m03_multinode-620888.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888 "sudo cat /home/docker/cp-test_multinode-620888-m03_multinode-620888.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 cp multinode-620888-m03:/home/docker/cp-test.txt multinode-620888-m02:/home/docker/cp-test_multinode-620888-m03_multinode-620888-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 ssh -n multinode-620888-m02 "sudo cat /home/docker/cp-test_multinode-620888-m03_multinode-620888-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.40s)
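
The CopyFile steps above exercise minikube's node-to-node cp syntax: a source or destination written as <node>:<path> refers to a file inside that node, while a bare path refers to the machine running the CLI. A rough manual equivalent of one round trip, reusing the profile, node and file names from this run (adjust them for any other cluster):

	minikube -p multinode-620888 cp testdata/cp-test.txt multinode-620888-m02:/home/docker/cp-test.txt
	minikube -p multinode-620888 cp multinode-620888-m02:/home/docker/cp-test.txt multinode-620888:/home/docker/cp-test_multinode-620888-m02_multinode-620888.txt
	minikube -p multinode-620888 ssh -n multinode-620888 "sudo cat /home/docker/cp-test_multinode-620888-m02_multinode-620888.txt"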

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-620888 node stop m03: (1.162918324s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-620888 status: exit status 7 (431.588057ms)

                                                
                                                
-- stdout --
	multinode-620888
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-620888-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-620888-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-620888 status --alsologtostderr: exit status 7 (428.028627ms)

                                                
                                                
-- stdout --
	multinode-620888
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-620888-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-620888-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:23:52.229913  180037 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:23:52.230184  180037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:23:52.230193  180037 out.go:358] Setting ErrFile to fd 2...
	I0819 18:23:52.230198  180037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:23:52.230427  180037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 18:23:52.230629  180037 out.go:352] Setting JSON to false
	I0819 18:23:52.230653  180037 mustload.go:65] Loading cluster: multinode-620888
	I0819 18:23:52.230685  180037 notify.go:220] Checking for updates...
	I0819 18:23:52.231061  180037 config.go:182] Loaded profile config "multinode-620888": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:23:52.231076  180037 status.go:255] checking status of multinode-620888 ...
	I0819 18:23:52.231477  180037 cli_runner.go:164] Run: docker container inspect multinode-620888 --format={{.State.Status}}
	I0819 18:23:52.248208  180037 status.go:330] multinode-620888 host status = "Running" (err=<nil>)
	I0819 18:23:52.248228  180037 host.go:66] Checking if "multinode-620888" exists ...
	I0819 18:23:52.248478  180037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-620888
	I0819 18:23:52.264713  180037 host.go:66] Checking if "multinode-620888" exists ...
	I0819 18:23:52.264960  180037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:23:52.265019  180037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-620888
	I0819 18:23:52.281671  180037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/multinode-620888/id_rsa Username:docker}
	I0819 18:23:52.365720  180037 ssh_runner.go:195] Run: systemctl --version
	I0819 18:23:52.369448  180037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:23:52.379383  180037 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:23:52.424344  180037 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-08-19 18:23:52.415290022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 18:23:52.424899  180037 kubeconfig.go:125] found "multinode-620888" server: "https://192.168.67.2:8443"
	I0819 18:23:52.424927  180037 api_server.go:166] Checking apiserver status ...
	I0819 18:23:52.424963  180037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:23:52.435036  180037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1470/cgroup
	I0819 18:23:52.443406  180037 api_server.go:182] apiserver freezer: "9:freezer:/docker/cd91adbcee29cecf850e569024f3bd033003d16e85cb1a3437ce9c19ee620a0c/crio/crio-38f05f26ca617756781d86a95520976fa3749f3bb4d5bc5a15b5640a99c775ae"
	I0819 18:23:52.443461  180037 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cd91adbcee29cecf850e569024f3bd033003d16e85cb1a3437ce9c19ee620a0c/crio/crio-38f05f26ca617756781d86a95520976fa3749f3bb4d5bc5a15b5640a99c775ae/freezer.state
	I0819 18:23:52.451030  180037 api_server.go:204] freezer state: "THAWED"
	I0819 18:23:52.451058  180037 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0819 18:23:52.454484  180037 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0819 18:23:52.454503  180037 status.go:422] multinode-620888 apiserver status = Running (err=<nil>)
	I0819 18:23:52.454512  180037 status.go:257] multinode-620888 status: &{Name:multinode-620888 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:23:52.454528  180037 status.go:255] checking status of multinode-620888-m02 ...
	I0819 18:23:52.454784  180037 cli_runner.go:164] Run: docker container inspect multinode-620888-m02 --format={{.State.Status}}
	I0819 18:23:52.471373  180037 status.go:330] multinode-620888-m02 host status = "Running" (err=<nil>)
	I0819 18:23:52.471397  180037 host.go:66] Checking if "multinode-620888-m02" exists ...
	I0819 18:23:52.471701  180037 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-620888-m02
	I0819 18:23:52.488799  180037 host.go:66] Checking if "multinode-620888-m02" exists ...
	I0819 18:23:52.489037  180037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:23:52.489075  180037 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-620888-m02
	I0819 18:23:52.505167  180037 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19468-24160/.minikube/machines/multinode-620888-m02/id_rsa Username:docker}
	I0819 18:23:52.589905  180037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:23:52.599701  180037 status.go:257] multinode-620888-m02 status: &{Name:multinode-620888-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:23:52.599730  180037 status.go:255] checking status of multinode-620888-m03 ...
	I0819 18:23:52.599951  180037 cli_runner.go:164] Run: docker container inspect multinode-620888-m03 --format={{.State.Status}}
	I0819 18:23:52.616765  180037 status.go:330] multinode-620888-m03 host status = "Stopped" (err=<nil>)
	I0819 18:23:52.616796  180037 status.go:343] host is not running, skipping remaining checks
	I0819 18:23:52.616805  180037 status.go:257] multinode-620888-m03 status: &{Name:multinode-620888-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.02s)
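
The check behind this test is visible in the exit codes: after m03 is stopped, status still reports the other nodes as Running but exits with status 7 because at least one host in the profile is Stopped. A minimal manual version of the same check, using the profile and node names from this run:

	minikube -p multinode-620888 node stop m03
	minikube -p multinode-620888 status
	echo $?    # 7 is expected while any node in the profile is stopped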

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-620888 node start m03 -v=7 --alsologtostderr: (8.072913251s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.68s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (92.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-620888
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-620888
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-620888: (24.594939031s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-620888 --wait=true -v=8 --alsologtostderr
E0819 18:24:42.067021   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-620888 --wait=true -v=8 --alsologtostderr: (1m8.112201311s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-620888
--- PASS: TestMultiNode/serial/RestartKeepsNodes (92.79s)
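
The point of this test is that a full stop and restart keeps the profile's node list intact rather than collapsing it back to a single node. A rough manual version of the same check, with the profile name from this run:

	minikube node list -p multinode-620888     # record the node list
	minikube stop -p multinode-620888
	minikube start -p multinode-620888 --wait=true
	minikube node list -p multinode-620888     # the same nodes should be listed again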

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-620888 node delete m03: (4.621649031s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.14s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-620888 stop: (23.424290957s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-620888 status: exit status 7 (77.498017ms)

                                                
                                                
-- stdout --
	multinode-620888
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-620888-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-620888 status --alsologtostderr: exit status 7 (76.995149ms)

                                                
                                                
-- stdout --
	multinode-620888
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-620888-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:26:02.774649  189733 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:26:02.774898  189733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:26:02.774906  189733 out.go:358] Setting ErrFile to fd 2...
	I0819 18:26:02.774910  189733 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:26:02.775090  189733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 18:26:02.775243  189733 out.go:352] Setting JSON to false
	I0819 18:26:02.775267  189733 mustload.go:65] Loading cluster: multinode-620888
	I0819 18:26:02.775308  189733 notify.go:220] Checking for updates...
	I0819 18:26:02.775673  189733 config.go:182] Loaded profile config "multinode-620888": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:26:02.775687  189733 status.go:255] checking status of multinode-620888 ...
	I0819 18:26:02.776097  189733 cli_runner.go:164] Run: docker container inspect multinode-620888 --format={{.State.Status}}
	I0819 18:26:02.793599  189733 status.go:330] multinode-620888 host status = "Stopped" (err=<nil>)
	I0819 18:26:02.793618  189733 status.go:343] host is not running, skipping remaining checks
	I0819 18:26:02.793624  189733 status.go:257] multinode-620888 status: &{Name:multinode-620888 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:26:02.793663  189733 status.go:255] checking status of multinode-620888-m02 ...
	I0819 18:26:02.793899  189733 cli_runner.go:164] Run: docker container inspect multinode-620888-m02 --format={{.State.Status}}
	I0819 18:26:02.809945  189733 status.go:330] multinode-620888-m02 host status = "Stopped" (err=<nil>)
	I0819 18:26:02.809980  189733 status.go:343] host is not running, skipping remaining checks
	I0819 18:26:02.809987  189733 status.go:257] multinode-620888-m02 status: &{Name:multinode-620888-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.58s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (46.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-620888 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0819 18:26:05.133976   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-620888 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (45.649401439s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-620888 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.17s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (23.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-620888
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-620888-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-620888-m02 --driver=docker  --container-runtime=crio: exit status 14 (57.372674ms)

                                                
                                                
-- stdout --
	* [multinode-620888-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-620888-m02' is duplicated with machine name 'multinode-620888-m02' in profile 'multinode-620888'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-620888-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-620888-m03 --driver=docker  --container-runtime=crio: (20.979033364s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-620888
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-620888: exit status 80 (245.169295ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-620888 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-620888-m03 already exists in multinode-620888-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-620888-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-620888-m03: (1.810786306s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.13s)
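
Both failures here come from minikube's name-collision checks rather than from the cluster itself: a new profile may not reuse a machine name already owned by another profile (exit status 14, MK_USAGE), and node add refuses to create a node whose generated name clashes with an existing standalone profile (exit status 80, GUEST_NODE_ADD). A sketch of the same collisions, with the names from this run:

	minikube start -p multinode-620888-m02 --driver=docker --container-runtime=crio   # exit 14: name already used as a machine in multinode-620888
	minikube start -p multinode-620888-m03 --driver=docker --container-runtime=crio   # standalone profile that shadows the next generated node name
	minikube node add -p multinode-620888                                             # exit 80: the new node m03 collides with the profile above
	minikube delete -p multinode-620888-m03                                           # remove the conflicting profile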

                                                
                                    
x
+
TestPreload (103.31s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-544459 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0819 18:27:29.627199   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-544459 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m18.829738469s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-544459 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-544459
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-544459: (5.589191765s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-544459 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0819 18:28:52.689353   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-544459 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (15.839178816s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-544459 image list
helpers_test.go:175: Cleaning up "test-preload-544459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-544459
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-544459: (1.939711382s)
--- PASS: TestPreload (103.31s)
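
The preload test builds a cluster on an older Kubernetes version with preloaded images disabled, pulls an extra image into it, then restarts on the current default version and verifies the image survived. The same sequence can be reproduced by hand with the commands from this run:

	minikube start -p test-preload-544459 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	minikube -p test-preload-544459 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-544459
	minikube start -p test-preload-544459 --memory=2200 --driver=docker --container-runtime=crio
	minikube -p test-preload-544459 image list    # busybox should still appear after the restart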

                                                
                                    
x
+
TestScheduledStopUnix (96.25s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-153263 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-153263 --memory=2048 --driver=docker  --container-runtime=crio: (20.514223717s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-153263 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-153263 -n scheduled-stop-153263
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-153263 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-153263 --cancel-scheduled
E0819 18:29:42.066730   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-153263 -n scheduled-stop-153263
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-153263
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-153263 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-153263
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-153263: exit status 7 (61.866258ms)

                                                
                                                
-- stdout --
	scheduled-stop-153263
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-153263 -n scheduled-stop-153263
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-153263 -n scheduled-stop-153263: exit status 7 (60.814005ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-153263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-153263
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-153263: (4.512575871s)
--- PASS: TestScheduledStopUnix (96.25s)
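
Scheduled stops are driven entirely by flags on minikube stop: --schedule arms a delayed stop, issuing a new --schedule appears to replace the pending one (note the "process already finished" messages above), and --cancel-scheduled disarms it; the pending deadline is exposed through the TimeToStop status field. A condensed version of the flow exercised above, with the profile name from this run:

	minikube stop -p scheduled-stop-153263 --schedule 5m                  # arm a stop five minutes out
	minikube status --format={{.TimeToStop}} -p scheduled-stop-153263    # shows the pending deadline
	minikube stop -p scheduled-stop-153263 --cancel-scheduled             # disarm it
	minikube stop -p scheduled-stop-153263 --schedule 15s                 # rearm with a short delay
	minikube status --format={{.Host}} -p scheduled-stop-153263           # reports Stopped (exit 7) once the stop has fired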

                                                
                                    
x
+
TestInsufficientStorage (12.43s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-756628 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-756628 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.156744733s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"253a0a61-92d8-44df-ac4e-6c9891ccd65d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-756628] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aafbb384-f003-4b6e-8487-1fdc267cad3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19468"}}
	{"specversion":"1.0","id":"1b95222a-a0a4-40ea-9c13-d6dd11407e3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a591b61b-cf95-4afd-846b-0ea4d234683f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig"}}
	{"specversion":"1.0","id":"e9e218f9-419c-4d88-9577-dd0ead991518","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube"}}
	{"specversion":"1.0","id":"118d2063-236c-47de-b68d-ead03a03f321","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4dad19b5-5c28-4c53-be4c-5e76d755d27e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d8818a03-bf74-466d-afdc-62dd8df573c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"37d0b911-15ec-425e-b63f-21f3d52b471c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8de648ba-32db-4fa3-a756-d415661ad140","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"076a1a3c-7f3b-4056-b52f-81b4b2eaa98c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6c38da0b-c1dd-4a8e-bd01-b750e5a0785a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-756628\" primary control-plane node in \"insufficient-storage-756628\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b147ae2-4fdf-4c19-863d-d75f4debaf7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d899733-5ac7-487f-9b1f-84caf5ce31e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d55e93a-9642-4735-9b0e-08eff201584c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-756628 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-756628 --output=json --layout=cluster: exit status 7 (245.747109ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-756628","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-756628","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:30:45.869561  212227 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-756628" does not appear in /home/jenkins/minikube-integration/19468-24160/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-756628 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-756628 --output=json --layout=cluster: exit status 7 (245.050242ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-756628","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-756628","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:30:46.115107  212325 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-756628" does not appear in /home/jenkins/minikube-integration/19468-24160/kubeconfig
	E0819 18:30:46.124574  212325 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/insufficient-storage-756628/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-756628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-756628
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-756628: (1.785134344s)
--- PASS: TestInsufficientStorage (12.43s)
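
With --output=json the start flow is emitted as one CloudEvents-style JSON object per line, which makes the failure easy to pick out mechanically: the error event above carries exit code 26 (RSRC_DOCKER_STORAGE), and status later reports code 507 (InsufficientStorage) for the node. As a sketch, assuming jq is available and reusing the flags from this run (the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values shown above are how the test simulates a full disk):

	minikube start -p insufficient-storage-756628 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.message'
	minikube status -p insufficient-storage-756628 --output=json --layout=cluster | jq -r .StatusName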

                                                
                                    
x
+
TestRunningBinaryUpgrade (51.06s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.640384854 start -p running-upgrade-862756 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.640384854 start -p running-upgrade-862756 --memory=2200 --vm-driver=docker  --container-runtime=crio: (26.996167678s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-862756 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-862756 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.312343796s)
helpers_test.go:175: Cleaning up "running-upgrade-862756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-862756
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-862756: (2.345258245s)
--- PASS: TestRunningBinaryUpgrade (51.06s)

                                                
                                    
x
+
TestKubernetesUpgrade (344.46s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0819 18:32:29.626455   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.924209679s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-068381
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-068381: (1.175894623s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-068381 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-068381 status --format={{.Host}}: exit status 7 (66.712542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.417896405s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-068381 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (74.126774ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-068381] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-068381
	    minikube start -p kubernetes-upgrade-068381 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0683812 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-068381 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0819 18:37:29.626441   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.27898781s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-068381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-068381
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-068381: (2.45426958s)
--- PASS: TestKubernetesUpgrade (344.46s)
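
The upgrade path here is simply a stop followed by a start with a newer --kubernetes-version against the same profile; asking for an older version afterwards is rejected up front with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) and the delete/recreate suggestions shown above. Condensed, with the versions and profile from this run:

	minikube start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-068381
	minikube start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=crio
	minikube start -p kubernetes-upgrade-068381 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio   # exits 106: downgrade refused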

                                                
                                    
x
+
TestMissingContainerUpgrade (119.33s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3244086093 start -p missing-upgrade-538614 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3244086093 start -p missing-upgrade-538614 --memory=2200 --driver=docker  --container-runtime=crio: (55.412126661s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-538614
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-538614: (10.411437481s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-538614
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-538614 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-538614 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.192761193s)
helpers_test.go:175: Cleaning up "missing-upgrade-538614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-538614
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-538614: (1.899672186s)
--- PASS: TestMissingContainerUpgrade (119.33s)
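
This scenario removes the Docker container behind an existing profile while leaving the profile's configuration on disk, then checks that the newer binary can bring the profile back up. The equivalent manual steps, using the old binary and profile name from this run:

	/tmp/minikube-v1.26.0.3244086093 start -p missing-upgrade-538614 --memory=2200 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-538614
	docker rm missing-upgrade-538614
	out/minikube-linux-amd64 start -p missing-upgrade-538614 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio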

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-473470 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-473470 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (71.414684ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-473470] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
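
The exit status 14 here is pure argument validation: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error text itself names the fix. Roughly, with the profile from this run (the final start below, with --no-kubernetes alone, is the intended usage rather than something exercised in this subtest):

	minikube start -p NoKubernetes-473470 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio   # exits 14
	minikube config unset kubernetes-version    # clears a globally configured version, as the error suggests
	minikube start -p NoKubernetes-473470 --no-kubernetes --driver=docker --container-runtime=crio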

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (33.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-473470 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-473470 --driver=docker  --container-runtime=crio: (33.186943737s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-473470 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (7.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-332815 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-332815 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (177.182448ms)

                                                
                                                
-- stdout --
	* [false-332815] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19468
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:30:51.586953  214622 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:30:51.587052  214622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:30:51.587058  214622 out.go:358] Setting ErrFile to fd 2...
	I0819 18:30:51.587063  214622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:30:51.587249  214622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19468-24160/.minikube/bin
	I0819 18:30:51.587834  214622 out.go:352] Setting JSON to false
	I0819 18:30:51.588790  214622 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":8002,"bootTime":1724084250,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0819 18:30:51.588869  214622 start.go:139] virtualization: kvm guest
	I0819 18:30:51.591558  214622 out.go:177] * [false-332815] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0819 18:30:51.593093  214622 notify.go:220] Checking for updates...
	I0819 18:30:51.593763  214622 out.go:177]   - MINIKUBE_LOCATION=19468
	I0819 18:30:51.595425  214622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:30:51.596971  214622 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19468-24160/kubeconfig
	I0819 18:30:51.598557  214622 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19468-24160/.minikube
	I0819 18:30:51.604182  214622 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0819 18:30:51.605350  214622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:30:51.607391  214622 config.go:182] Loaded profile config "NoKubernetes-473470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:30:51.607531  214622 config.go:182] Loaded profile config "force-systemd-env-496583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:30:51.607663  214622 config.go:182] Loaded profile config "offline-crio-464724": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0819 18:30:51.607774  214622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:30:51.636168  214622 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 18:30:51.636309  214622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:30:51.700316  214622 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:93 SystemTime:2024-08-19 18:30:51.688380004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0819 18:30:51.700465  214622 docker.go:307] overlay module found
	I0819 18:30:51.702560  214622 out.go:177] * Using the docker driver based on user configuration
	I0819 18:30:51.703869  214622 start.go:297] selected driver: docker
	I0819 18:30:51.703895  214622 start.go:901] validating driver "docker" against <nil>
	I0819 18:30:51.703909  214622 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:30:51.706367  214622 out.go:201] 
	W0819 18:30:51.707537  214622 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0819 18:30:51.708681  214622 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-332815 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-332815" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
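
The empty kubeconfig above (clusters, contexts and users all null) is consistent with the profile never having been created. A quick manual check of the same thing, assuming the default kubeconfig location:

    kubectl config get-contexts false-332815
    # errors because no such context exists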

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-332815

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-332815"

                                                
                                                
----------------------- debugLogs end: false-332815 [took: 7.26102898s] --------------------------------
helpers_test.go:175: Cleaning up "false-332815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-332815
--- PASS: TestNetworkPlugins/group/false (7.63s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.00s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-473470 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-473470 --no-kubernetes --driver=docker  --container-runtime=crio: (4.095623853s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-473470 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-473470 status -o json: exit status 2 (346.214946ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-473470","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-473470
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-473470: (3.561000492s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.00s)
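
The exit status 2 from status -o json is expected here: the JSON shows the host running with the kubelet and API server stopped, which is exactly what a --no-kubernetes profile should report. A sketch for checking a single field, assuming this build supports the --format Go-template flag:

    out/minikube-linux-amd64 -p NoKubernetes-473470 status --format='{{.Kubelet}}'
    # Stopped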

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-473470 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-473470 --no-kubernetes --driver=docker  --container-runtime=crio: (7.536404883s)
--- PASS: TestNoKubernetes/serial/Start (7.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-473470 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-473470 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.315777ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
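
The non-zero exit is the point of this check: systemctl is-active reports a unit that is not running with a non-zero status (3 here), and the ssh wrapper propagates it, so the "failure" proves the kubelet is not active. The same check run by hand, without --quiet so the state is printed:

    out/minikube-linux-amd64 ssh -p NoKubernetes-473470 "sudo systemctl is-active kubelet"
    # inactive   (command exits with status 3)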

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-473470
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-473470: (1.19274457s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-473470 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-473470 --driver=docker  --container-runtime=crio: (8.968720602s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-473470 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-473470 "sudo systemctl is-active --quiet service kubelet": exit status 1 (299.79622ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.50s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.50s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (88.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3082953516 start -p stopped-upgrade-742592 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3082953516 start -p stopped-upgrade-742592 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m2.670060668s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3082953516 -p stopped-upgrade-742592 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3082953516 -p stopped-upgrade-742592 stop: (2.454598993s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-742592 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-742592 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.255067244s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (88.38s)
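
The sequence above is the stopped-binary upgrade path: the pinned v1.26.0 binary creates the cluster, the same old binary stops it, and the binary under test then starts it back up in place. Condensed, the three steps are (flags as shown in the log, verbosity flags omitted):

    /tmp/minikube-v1.26.0.3082953516 start -p stopped-upgrade-742592 --memory=2200 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.26.0.3082953516 -p stopped-upgrade-742592 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-742592 --memory=2200 --driver=docker --container-runtime=crio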

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-742592
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

                                                
                                    
x
+
TestPause/serial/Start (43.33s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-225342 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-225342 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (43.326209993s)
--- PASS: TestPause/serial/Start (43.33s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (33.60s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-225342 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-225342 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.585840927s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (43.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.906271661s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.91s)

                                                
                                    
x
+
TestPause/serial/Pause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-225342 --alsologtostderr -v=5
E0819 18:34:42.067176   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestPause/serial/Pause (0.75s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.33s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-225342 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-225342 --output=json --layout=cluster: exit status 2 (333.599793ms)

                                                
                                                
-- stdout --
	{"Name":"pause-225342","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-225342","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)
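
In the --layout=cluster JSON above, minikube encodes component state with HTTP-like codes: 200/OK for healthy, 418/Paused for paused components, and 405/Stopped for the kubelet. A sketch for pulling just the per-node component states, assuming jq is available on the host:

    out/minikube-linux-amd64 status -p pause-225342 --output=json --layout=cluster | jq '.Nodes[].Components'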

                                                
                                    
x
+
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-225342 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.76s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-225342 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.76s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.71s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-225342 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-225342 --alsologtostderr -v=5: (3.712422156s)
--- PASS: TestPause/serial/DeletePaused (3.71s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.54s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-225342
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-225342: exit status 1 (20.863637ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-225342: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)
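
Cleanup is verified negatively: the profile list no longer contains pause-225342, docker ps -a shows no matching container, and docker volume inspect fails with "no such volume". A sketch of the same spot checks scoped to the profile name, assuming standard docker CLI filters:

    docker ps -a --filter name=pause-225342
    docker volume inspect pause-225342    # expected to fail: no such volume
    docker network ls --filter name=pause-225342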

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (42.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.558982964s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (52.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.226356092s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-332815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-332815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s286k" [180caa17-579e-4090-a76d-bcbcf212423a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s286k" [180caa17-579e-4090-a76d-bcbcf212423a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004457116s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2m67s" [25f362eb-26f0-4da5-b5b4-79cbb4769c50] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003621323s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-332815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
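
The Localhost and HairPin probes differ only in their target: nc against localhost 8080 confirms the pod can reach its own container port, while nc against the hostname netcat goes out through the Service and back to the same pod, exercising hairpin traffic. A sketch for inspecting the Service the hairpin path resolves, assuming testdata/netcat-deployment.yaml names it netcat in the default namespace:

    kubectl --context auto-332815 get svc netcat -o wide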

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-332815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-332815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5v4tc" [6a9f63e1-0b6c-4577-b4f2-cfe69b109ab4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5v4tc" [6a9f63e1-0b6c-4577-b4f2-cfe69b109ab4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.002878895s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-332815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (44.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (44.541016289s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (44.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-cwrd7" [ce8486c2-4618-47fc-a421-2e1a4cd8bdb1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004551201s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-332815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-332815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fnghx" [7b61490b-7894-49d0-91cb-17b083e0d747] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fnghx" [7b61490b-7894-49d0-91cb-17b083e0d747] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004153279s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (60.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (59.996794747s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (60.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-332815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (47.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.628385786s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-332815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-332815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xdqgz" [07cdabbb-c530-4087-9a98-391ddee93bc7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xdqgz" [07cdabbb-c530-4087-9a98-391ddee93bc7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00407063s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-332815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-332815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-332815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ngk2f" [90baf67d-3e7c-4132-b85b-47620f079539] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ngk2f" [90baf67d-3e7c-4132-b85b-47620f079539] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003623776s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (62.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-332815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m2.479494s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-332815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nsn8k" [2d171056-ab41-438d-85a6-f575972f9032] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004511724s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-332815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-332815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-25t5j" [ee17be70-2351-4cb8-9b54-6c7b77391344] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-25t5j" [ee17be70-2351-4cb8-9b54-6c7b77391344] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003925174s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-332815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (142.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-061226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-061226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m22.140925122s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (142.14s)
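This profile pins Kubernetes v1.20.0 on the crio runtime. A quick way to confirm the server actually came up at that version after the start (a sketch, not part of the test):
	kubectl --context old-k8s-version-061226 version -o yaml | grep -A 3 serverVersion
	out/minikube-linux-amd64 profile list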

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (57.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-964942 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-964942 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (57.227992223s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.23s)
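With --preload=false, minikube skips the preloaded image tarball and loads the Kubernetes images individually, which is what this variant exercises. To inspect what ended up in the container runtime afterwards (a sketch, not part of the test):
	out/minikube-linux-amd64 -p no-preload-964942 image list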

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.10s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-142717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-142717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (42.10430726s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.10s)
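The default-k8s-diff-port profile moves the API server from the default 8443 to 8444 via --apiserver-port. The effective endpoint can be confirmed from the generated kubeconfig entry (a sketch; the control plane URL should end in :8444):
	kubectl --context default-k8s-diff-port-142717 cluster-info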

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-332815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-332815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5h8qh" [06b93d55-8785-4c1c-8045-b164789c9043] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5h8qh" [06b93d55-8785-4c1c-8045-b164789c9043] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004264076s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-332815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-332815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0819 18:43:00.846231   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:00.951690   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:09.645742   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:10.079526   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:10.085930   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:10.097262   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:10.118744   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:10.160090   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:10.241491   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:10.402863   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:10.724493   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:11.366598   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:12.648135   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:14.956844   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:15.209740   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:20.331368   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (25.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-083950 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-083950 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (24.994956893s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.00s)
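Here --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 is passed through to kubeadm, so the node's allocated podCIDR should fall inside that range. One way to check (a sketch, not part of the test):
	kubectl --context newest-cni-083950 get nodes \
	  -o jsonpath='{.items[0].spec.podCIDR}{"\n"}'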

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-142717 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e729e1f6-8e7f-4e2c-aa5c-20ddb321be4b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e729e1f6-8e7f-4e2c-aa5c-20ddb321be4b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003969287s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-142717 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-142717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-142717 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)
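The addon is enabled with its image and registry deliberately redirected (echoserver pulled from fake.domain), so metrics-server is not expected to serve real metrics here; the describe step only verifies that the override landed. A direct check of the deployed image (a sketch):
	kubectl --context default-k8s-diff-port-142717 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'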

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-142717 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-142717 --alsologtostderr -v=3: (11.909041386s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-964942 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2648e2e3-a105-4e12-a9df-4c96be7841dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2648e2e3-a105-4e12-a9df-4c96be7841dc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003689129s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-964942 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142717 -n default-k8s-diff-port-142717
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142717 -n default-k8s-diff-port-142717: exit status 7 (63.342626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-142717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)
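minikube status reports host state through its exit code as well as its output, which is why the non-zero exit status 7 on a stopped profile is treated as "(may be ok)" above. A sketch of observing both from a shell:
	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142717 \
	  -n default-k8s-diff-port-142717; echo "status exit code: $?"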

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-142717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-142717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m21.883968996s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-142717 -n default-k8s-diff-port-142717
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.90s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-964942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-964942 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-964942 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-964942 --alsologtostderr -v=3: (11.911377602s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-083950 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-083950 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.061879165s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-083950 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-083950 --alsologtostderr -v=3: (1.193082457s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-083950 -n newest-cni-083950
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-083950 -n newest-cni-083950: exit status 7 (60.594715ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-083950 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (13.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-083950 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-083950 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (12.664485109s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-083950 -n newest-cni-083950
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-964942 -n no-preload-964942
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-964942 -n no-preload-964942: exit status 7 (92.533697ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-964942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (263.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-964942 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-964942 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m23.278598951s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-964942 -n no-preload-964942
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.56s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-083950 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
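VerifyKubernetesImages lists the images in the runtime as JSON and diffs them against an expected set; the two kindest/kindnetd tags above are just reported as extras. To eyeball the same list by hand (a sketch; the repoTags field name is an assumption about the JSON layout, not confirmed from this log):
	out/minikube-linux-amd64 -p newest-cni-083950 image list --format=json \
	  | jq -r '.[].repoTags[]?'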

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-083950 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-083950 -n newest-cni-083950
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-083950 -n newest-cni-083950: exit status 2 (337.225388ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-083950 -n newest-cni-083950
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-083950 -n newest-cni-083950: exit status 2 (322.8368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-083950 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-083950 -n newest-cni-083950
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-083950 -n newest-cni-083950
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.95s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (41.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-469884 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 18:39:42.067782   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-469884 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (41.232314736s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-061226 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b905f8b7-a344-4aa7-bf7d-ef4e43c2e8f2] Pending
helpers_test.go:344: "busybox" [b905f8b7-a344-4aa7-bf7d-ef4e43c2e8f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b905f8b7-a344-4aa7-bf7d-ef4e43c2e8f2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003419969s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-061226 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-061226 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-061226 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-469884 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5a8c7f36-e063-4fe9-8a1d-323005791403] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5a8c7f36-e063-4fe9-8a1d-323005791403] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004276337s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-469884 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-061226 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-061226 --alsologtostderr -v=3: (11.936511408s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-469884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-469884 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.78s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (14.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-469884 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-469884 --alsologtostderr -v=3: (14.470871531s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (14.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061226 -n old-k8s-version-061226
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061226 -n old-k8s-version-061226: exit status 7 (62.526879ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-061226 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (139.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-061226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0819 18:40:25.786297   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:25.792680   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:25.804037   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:25.825384   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:25.866962   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:25.948636   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:26.110036   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:26.431725   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:27.073430   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:28.355188   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:30.916728   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:31.096486   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:31.102868   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:31.114304   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:31.135749   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:31.177593   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:31.259799   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:31.421234   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:31.742671   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:32.384854   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-061226 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m19.597365601s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-061226 -n old-k8s-version-061226
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (139.88s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-469884 -n embed-certs-469884
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-469884 -n embed-certs-469884: exit status 7 (66.46332ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-469884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (262.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-469884 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0
E0819 18:40:33.666192   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:36.039012   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:36.227918   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:41.349255   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:46.280731   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:51.591279   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:54.871342   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:54.877758   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:54.889194   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:54.910892   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:54.952499   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:55.034130   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:55.196277   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:55.517982   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:56.159866   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:40:57.441604   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:00.003647   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:05.125042   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:06.762467   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:12.073389   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:15.366936   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:35.848493   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:38.908077   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:38.914419   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:38.925792   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:38.947188   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:38.988556   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:39.069980   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:39.231479   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:39.553384   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:40.195615   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:41.477249   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:44.039519   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:47.724414   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/auto-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:49.161555   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:53.035410   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/kindnet-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:41:59.403288   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:07.273256   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:07.279652   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:07.291125   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:07.312480   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:07.354397   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:07.436280   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:07.598144   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:07.919456   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:08.561727   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:09.843589   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:12.405484   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:16.810354   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:17.527399   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:19.884657   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/custom-flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:19.974062   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:19.980402   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:19.991756   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:20.013082   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:20.054404   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:20.135943   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:20.297618   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:20.619307   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:21.261509   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:22.543403   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:25.105646   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:27.769452   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:29.626405   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/functional-511891/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:30.227788   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
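
Note: the cert_rotation errors interleaved above appear to come from API-client watchers that still reference client certificates of profiles deleted earlier in the run (enable-default-cni-332815, flannel-332815, and similar); they are log noise rather than failures of the tests running here. A minimal check, assuming shell access to the CI workspace and using a path copied verbatim from the messages above:

	ls -l /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt
	# expected: "No such file or directory", matching the log lines above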
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-469884 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.0: (4m21.738601529s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-469884 -n embed-certs-469884
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.01s)
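
For reference, the SecondStart run above can be replayed outside CI with the same flags; a sketch assuming a local out/minikube-linux-amd64 build, with both commands copied from the (dbg) lines in this block:

	out/minikube-linux-amd64 start -p embed-certs-469884 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=crio --kubernetes-version=v1.31.0
	# took 4m21s on this runner
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-469884 -n embed-certs-469884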

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wxb2n" [3e795e51-f891-4de4-a20c-77ea926250e6] Running
E0819 18:42:40.469998   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:42:45.135539   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/addons-142951/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004196255s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
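
The readiness check above (waiting up to 9m0s for pods labelled k8s-app=kubernetes-dashboard) can be approximated by hand with kubectl; a minimal sketch, assuming the kubeconfig context carries the profile name as it does elsewhere in this report:

	kubectl --context old-k8s-version-061226 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s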

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wxb2n" [3e795e51-f891-4de4-a20c-77ea926250e6] Running
E0819 18:42:48.251353   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003682164s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-061226 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-061226 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
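
The image check above lists everything in the node's image store as JSON and flags entries it does not recognise as minikube's own. To inspect the same data by hand, a minimal sketch (jq and the repoTags field name are assumptions, not something this report confirms):

	out/minikube-linux-amd64 -p old-k8s-version-061226 image list --format=json | jq .
	# or, to print only tag names if the entries expose a repoTags array:
	out/minikube-linux-amd64 -p old-k8s-version-061226 image list --format=json | jq -r '.[] | .repoTags[]?'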

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-061226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061226 -n old-k8s-version-061226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061226 -n old-k8s-version-061226: exit status 2 (275.143503ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-061226 -n old-k8s-version-061226
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-061226 -n old-k8s-version-061226: exit status 2 (272.398041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-061226 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061226 -n old-k8s-version-061226
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-061226 -n old-k8s-version-061226
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.44s)
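
The pause check above is a four-step sequence: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (both status calls exit 2 while paused, which the test treats as acceptable), then unpause. A sketch to run the same sequence by hand, commands copied from the (dbg) lines above:

	out/minikube-linux-amd64 pause -p old-k8s-version-061226 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-061226 -n old-k8s-version-061226   # prints Paused, exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-061226 -n old-k8s-version-061226     # prints Stopped, exit status 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-061226 --alsologtostderr -v=1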

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xlv6p" [0dcbe706-e227-41e1-afe2-3ffeee455ab7] Running
E0819 18:43:29.213558   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/enable-default-cni-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:30.572882   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003754506s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xlv6p" [0dcbe706-e227-41e1-afe2-3ffeee455ab7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004120008s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-142717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-142717 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-142717 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142717 -n default-k8s-diff-port-142717
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142717 -n default-k8s-diff-port-142717: exit status 2 (297.183713ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-142717 -n default-k8s-diff-port-142717
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-142717 -n default-k8s-diff-port-142717: exit status 2 (268.452389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-142717 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-142717 -n default-k8s-diff-port-142717
E0819 18:43:38.732223   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/calico-332815/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-142717 -n default-k8s-diff-port-142717
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7lzjn" [38657981-7a4d-4891-b22e-64483c77f42c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003432312s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7lzjn" [38657981-7a4d-4891-b22e-64483c77f42c] Running
E0819 18:43:47.389436   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/default-k8s-diff-port-142717/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:51.055100   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/bridge-332815/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004087761s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-964942 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-964942 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.43s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-964942 --alsologtostderr -v=1
E0819 18:43:52.510934   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/default-k8s-diff-port-142717/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-964942 -n no-preload-964942
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-964942 -n no-preload-964942: exit status 2 (260.055085ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-964942 -n no-preload-964942
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-964942 -n no-preload-964942: exit status 2 (267.733657ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-964942 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-964942 -n no-preload-964942
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-964942 -n no-preload-964942
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.43s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-khct5" [01f1db7e-f1c1-4d73-aa91-0ca596f6dfc4] Running
E0819 18:44:59.279872   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:59.286294   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:59.297672   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:59.319048   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:59.360439   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:59.441848   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:59.603342   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:59.925039   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:45:00.566519   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003864476s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-khct5" [01f1db7e-f1c1-4d73-aa91-0ca596f6dfc4] Running
E0819 18:45:01.848522   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:45:03.834873   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/flannel-332815/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:45:04.195961   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/default-k8s-diff-port-142717/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:45:04.410371   30966 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19468-24160/.minikube/profiles/old-k8s-version-061226/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00400807s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-469884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-469884 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-469884 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-469884 -n embed-certs-469884
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-469884 -n embed-certs-469884: exit status 2 (267.156898ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-469884 -n embed-certs-469884
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-469884 -n embed-certs-469884: exit status 2 (265.634582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-469884 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-469884 -n embed-certs-469884
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-469884 -n embed-certs-469884
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.40s)

                                                
                                    

Test skip (25/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-332815 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-332815" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
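
The empty kubeconfig shown above (no clusters, contexts, or users) is why every kubectl probe in this debugLogs block fails with a missing-context error: the kubenet-332815 profile was never started because the test was skipped. Two quick confirmations, assuming the same kubeconfig and minikube binary the test used:

	kubectl config get-contexts            # no kubenet-332815 context is expected
	out/minikube-linux-amd64 profile list  # no kubenet-332815 profile is expected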

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-332815

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-332815"

                                                
                                                
----------------------- debugLogs end: kubenet-332815 [took: 3.422207618s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-332815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-332815
--- SKIP: TestNetworkPlugins/group/kubenet (3.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-332815 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-332815" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-332815

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-332815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-332815"

                                                
                                                
----------------------- debugLogs end: cilium-332815 [took: 3.322685899s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-332815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-332815
--- SKIP: TestNetworkPlugins/group/cilium (3.46s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-156770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-156770
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    