Test Report: Docker_Linux_crio_arm64 19166

98210e04775e460720dbaecad9184210c804dd29:2024-07-01:35133

Failed tests (4/328)

|-------|--------------------------------------------------------|--------------|
| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 30    | TestAddons/parallel/Ingress                            | 168.35       |
| 32    | TestAddons/parallel/MetricsServer                      | 310.22       |
| 174   | TestMultiControlPlane/serial/RestartCluster            | 128.30       |
| 302   | TestStartStop/group/old-k8s-version/serial/SecondStart | 379.23       |
|-------|--------------------------------------------------------|--------------|
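
To reproduce one of these failures on its own, minikube's integration suite can be filtered to a single test name with Go's -run flag. A minimal sketch, assuming the standard test/integration layout; the --minikube-start-args flag and the timeout are assumptions based on minikube's test conventions, not values taken from this report:

# Sketch: re-run only the failing Ingress test with the same driver and runtime.
# --minikube-start-args is an assumed harness flag; adjust to your checkout.
go test ./test/integration -v -timeout 60m -run 'TestAddons/parallel/Ingress' \
  --args --minikube-start-args='--driver=docker --container-runtime=crio'
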
TestAddons/parallel/Ingress (168.35s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-929335 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-929335 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-929335 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a51114ae-f317-4e29-9a84-d94be22aaa33] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a51114ae-f317-4e29-9a84-d94be22aaa33] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003234576s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-929335 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.43971609s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
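
Exit status 28 is curl's timeout code (CURLE_OPERATION_TIMEDOUT), which matches the ssh command running for roughly 2m11s before giving up. A minimal manual reproduction, reusing the commands logged above; the explicit -m timeout and the kubectl cross-checks are illustrative additions, not part of the test:

# Repeat the failing check by hand (profile name taken from this run).
out/minikube-linux-arm64 -p addons-929335 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
# Cross-check that the ingress resource and controller pods look healthy:
kubectl --context addons-929335 get ingress
kubectl --context addons-929335 -n ingress-nginx get pods -o wide
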
addons_test.go:288: (dbg) Run:  kubectl --context addons-929335 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:299: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.051244512s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:301: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:305: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
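
The nslookup above queries the node IP (192.168.49.2) directly, where the ingress-dns addon is expected to answer on port 53; a connection timeout rather than an NXDOMAIN answer suggests nothing was listening at all. A hedged sketch for probing this by hand; the pod label and namespace are assumptions about how the addon deploys, not facts from this log:

# Query the ingress-dns responder directly (IP taken from this run's log).
dig +time=5 +tries=1 @192.168.49.2 hello-john.test
# Check whether the ingress-dns pod is running (label/namespace assumed):
kubectl --context addons-929335 -n kube-system get pods -l app=minikube-ingress-dns
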
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-929335 addons disable ingress-dns --alsologtostderr -v=1: (1.45962679s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-929335 addons disable ingress --alsologtostderr -v=1: (7.718181373s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-929335
helpers_test.go:235: (dbg) docker inspect addons-929335:

-- stdout --
	[
	    {
	        "Id": "65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771",
	        "Created": "2024-07-01T14:15:52.872318521Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3714989,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-01T14:15:53.002432627Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cf53f54b1bed0b432ebf08c6ac817bec062867b90e25c5452b8e7c3276a7ff",
	        "ResolvConfPath": "/var/lib/docker/containers/65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771/hostname",
	        "HostsPath": "/var/lib/docker/containers/65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771/hosts",
	        "LogPath": "/var/lib/docker/containers/65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771/65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771-json.log",
	        "Name": "/addons-929335",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-929335:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-929335",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/df5a9c6cc6b85d0a05bebe77804bb3f6909353b546779825eac1ac22d05fbeca-init/diff:/var/lib/docker/overlay2/c3139abb5cf1c83f6f12f6a5f4a9c8df468321ed41d6e455d104ebf4c7d8657d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df5a9c6cc6b85d0a05bebe77804bb3f6909353b546779825eac1ac22d05fbeca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df5a9c6cc6b85d0a05bebe77804bb3f6909353b546779825eac1ac22d05fbeca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df5a9c6cc6b85d0a05bebe77804bb3f6909353b546779825eac1ac22d05fbeca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-929335",
	                "Source": "/var/lib/docker/volumes/addons-929335/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-929335",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-929335",
	                "name.minikube.sigs.k8s.io": "addons-929335",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91b7a1a3f91955bf00fc9baba8b0810bc5106999d954fb2e040046ed7247965a",
	            "SandboxKey": "/var/run/docker/netns/91b7a1a3f919",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-929335": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "259bced3400858624c0bb065c3728c922a312012e2621de527c2f60710e627ba",
	                    "EndpointID": "4acca51483063e8b4c164e723fcc1e17a204ecb0fe40713da9b89841755a0227",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-929335",
	                        "65d5b1b0f7f0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
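
The port bindings in the inspect output show the container's SSH port (22/tcp) published on 127.0.0.1:33900. The harness reads this mapping back with a Go template later in this log; the same query can be run by hand, with the format string copied from that log line:

# Print the host port mapped to 22/tcp; with the bindings above this yields 33900.
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-929335
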
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-929335 -n addons-929335
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-929335 logs -n 25: (1.423599843s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-789626   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | -p download-only-789626                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| delete  | -p download-only-789626                                                                     | download-only-789626   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| delete  | -p download-only-281343                                                                     | download-only-281343   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| delete  | -p download-only-789626                                                                     | download-only-789626   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| start   | --download-only -p                                                                          | download-docker-822470 | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | download-docker-822470                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-822470                                                                   | download-docker-822470 | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-147566   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | binary-mirror-147566                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42755                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-147566                                                                     | binary-mirror-147566   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| addons  | disable dashboard -p                                                                        | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | addons-929335                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | addons-929335                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-929335 --wait=true                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:19 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	|         | -p addons-929335                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-929335 ip                                                                            | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	| addons  | addons-929335 addons disable                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	|         | -p addons-929335                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-929335 ssh cat                                                                       | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	|         | /opt/local-path-provisioner/pvc-c612bd66-1de9-4129-954e-9710bab6cabd_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-929335 addons disable                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:20 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	|         | addons-929335                                                                               |                        |         |         |                     |                     |
	| addons  | addons-929335 addons                                                                        | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:20 UTC | 01 Jul 24 14:21 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-929335 addons                                                                        | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:21 UTC | 01 Jul 24 14:21 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-929335 ssh curl -s                                                                   | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:21 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-929335 ip                                                                            | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:23 UTC | 01 Jul 24 14:23 UTC |
	| addons  | addons-929335 addons disable                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:23 UTC | 01 Jul 24 14:23 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-929335 addons disable                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:23 UTC | 01 Jul 24 14:23 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 14:15:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 14:15:28.076046 3714493 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:15:28.076184 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:15:28.076194 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:15:28.076199 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:15:28.076488 3714493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 14:15:28.076935 3714493 out.go:298] Setting JSON to false
	I0701 14:15:28.077905 3714493 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":165479,"bootTime":1719677849,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0701 14:15:28.077979 3714493 start.go:139] virtualization:  
	I0701 14:15:28.080375 3714493 out.go:177] * [addons-929335] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0701 14:15:28.082871 3714493 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 14:15:28.083067 3714493 notify.go:220] Checking for updates...
	I0701 14:15:28.087089 3714493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 14:15:28.088848 3714493 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:15:28.090827 3714493 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	I0701 14:15:28.092706 3714493 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0701 14:15:28.094518 3714493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 14:15:28.096503 3714493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 14:15:28.126501 3714493 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0701 14:15:28.126602 3714493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:15:28.182184 3714493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-01 14:15:28.173117927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:15:28.182300 3714493 docker.go:295] overlay module found
	I0701 14:15:28.185140 3714493 out.go:177] * Using the docker driver based on user configuration
	I0701 14:15:28.187141 3714493 start.go:297] selected driver: docker
	I0701 14:15:28.187158 3714493 start.go:901] validating driver "docker" against <nil>
	I0701 14:15:28.187172 3714493 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 14:15:28.187823 3714493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:15:28.236319 3714493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-01 14:15:28.227485738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:15:28.236487 3714493 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 14:15:28.236711 3714493 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 14:15:28.238778 3714493 out.go:177] * Using Docker driver with root privileges
	I0701 14:15:28.240814 3714493 cni.go:84] Creating CNI manager for ""
	I0701 14:15:28.240836 3714493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 14:15:28.240847 3714493 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 14:15:28.240929 3714493 start.go:340] cluster config:
	{Name:addons-929335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-929335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 14:15:28.243318 3714493 out.go:177] * Starting "addons-929335" primary control-plane node in "addons-929335" cluster
	I0701 14:15:28.245071 3714493 cache.go:121] Beginning downloading kic base image for docker with crio
	I0701 14:15:28.246947 3714493 out.go:177] * Pulling base image v0.0.44-1719413016-19142 ...
	I0701 14:15:28.249160 3714493 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:15:28.249212 3714493 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0701 14:15:28.249220 3714493 cache.go:56] Caching tarball of preloaded images
	I0701 14:15:28.249256 3714493 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon
	I0701 14:15:28.249297 3714493 preload.go:173] Found /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0701 14:15:28.249306 3714493 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0701 14:15:28.249651 3714493 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/config.json ...
	I0701 14:15:28.249670 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/config.json: {Name:mkf278bfd2d5e50e84cb1fa4b086afbb0de93b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:15:28.266248 3714493 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d to local cache
	I0701 14:15:28.266359 3714493 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local cache directory
	I0701 14:15:28.266381 3714493 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local cache directory, skipping pull
	I0701 14:15:28.266387 3714493 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d exists in cache, skipping pull
	I0701 14:15:28.266395 3714493 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d as a tarball
	I0701 14:15:28.266400 3714493 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d from local cache
	I0701 14:15:44.884731 3714493 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d from cached tarball
	I0701 14:15:44.884774 3714493 cache.go:194] Successfully downloaded all kic artifacts
	I0701 14:15:44.884828 3714493 start.go:360] acquireMachinesLock for addons-929335: {Name:mka8f5764327253860363894b4c32861892f785a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 14:15:44.885530 3714493 start.go:364] duration metric: took 673.977µs to acquireMachinesLock for "addons-929335"
	I0701 14:15:44.885574 3714493 start.go:93] Provisioning new machine with config: &{Name:addons-929335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-929335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0701 14:15:44.885672 3714493 start.go:125] createHost starting for "" (driver="docker")
	I0701 14:15:44.888037 3714493 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0701 14:15:44.888284 3714493 start.go:159] libmachine.API.Create for "addons-929335" (driver="docker")
	I0701 14:15:44.888327 3714493 client.go:168] LocalClient.Create starting
	I0701 14:15:44.888446 3714493 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem
	I0701 14:15:45.979184 3714493 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem
	I0701 14:15:46.514386 3714493 cli_runner.go:164] Run: docker network inspect addons-929335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0701 14:15:46.529891 3714493 cli_runner.go:211] docker network inspect addons-929335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0701 14:15:46.529971 3714493 network_create.go:284] running [docker network inspect addons-929335] to gather additional debugging logs...
	I0701 14:15:46.529992 3714493 cli_runner.go:164] Run: docker network inspect addons-929335
	W0701 14:15:46.544743 3714493 cli_runner.go:211] docker network inspect addons-929335 returned with exit code 1
	I0701 14:15:46.544780 3714493 network_create.go:287] error running [docker network inspect addons-929335]: docker network inspect addons-929335: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-929335 not found
	I0701 14:15:46.544794 3714493 network_create.go:289] output of [docker network inspect addons-929335]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-929335 not found
	
	** /stderr **
	I0701 14:15:46.544888 3714493 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 14:15:46.559405 3714493 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40006d6ce0}
	I0701 14:15:46.559447 3714493 network_create.go:124] attempt to create docker network addons-929335 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0701 14:15:46.559503 3714493 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-929335 addons-929335
	I0701 14:15:46.627461 3714493 network_create.go:108] docker network addons-929335 192.168.49.0/24 created
	I0701 14:15:46.627495 3714493 kic.go:121] calculated static IP "192.168.49.2" for the "addons-929335" container
	I0701 14:15:46.627572 3714493 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0701 14:15:46.642027 3714493 cli_runner.go:164] Run: docker volume create addons-929335 --label name.minikube.sigs.k8s.io=addons-929335 --label created_by.minikube.sigs.k8s.io=true
	I0701 14:15:46.657570 3714493 oci.go:103] Successfully created a docker volume addons-929335
	I0701 14:15:46.657677 3714493 cli_runner.go:164] Run: docker run --rm --name addons-929335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-929335 --entrypoint /usr/bin/test -v addons-929335:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d -d /var/lib
	I0701 14:15:48.682208 3714493 cli_runner.go:217] Completed: docker run --rm --name addons-929335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-929335 --entrypoint /usr/bin/test -v addons-929335:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d -d /var/lib: (2.024472652s)
	I0701 14:15:48.682238 3714493 oci.go:107] Successfully prepared a docker volume addons-929335
	I0701 14:15:48.682259 3714493 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:15:48.682278 3714493 kic.go:194] Starting extracting preloaded images to volume ...
	I0701 14:15:48.682364 3714493 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-929335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d -I lz4 -xf /preloaded.tar -C /extractDir
	I0701 14:15:52.802232 3714493 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-929335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d -I lz4 -xf /preloaded.tar -C /extractDir: (4.119827228s)
	I0701 14:15:52.802261 3714493 kic.go:203] duration metric: took 4.119980854s to extract preloaded images to volume ...
	W0701 14:15:52.802396 3714493 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0701 14:15:52.802531 3714493 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0701 14:15:52.858934 3714493 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-929335 --name addons-929335 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-929335 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-929335 --network addons-929335 --ip 192.168.49.2 --volume addons-929335:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d
	I0701 14:15:53.183631 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Running}}
	I0701 14:15:53.205298 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:15:53.229244 3714493 cli_runner.go:164] Run: docker exec addons-929335 stat /var/lib/dpkg/alternatives/iptables
	I0701 14:15:53.297257 3714493 oci.go:144] the created container "addons-929335" has a running status.
	I0701 14:15:53.297289 3714493 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa...
	I0701 14:15:53.580631 3714493 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0701 14:15:53.611570 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:15:53.637407 3714493 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0701 14:15:53.637425 3714493 kic_runner.go:114] Args: [docker exec --privileged addons-929335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0701 14:15:53.724718 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:15:53.748074 3714493 machine.go:94] provisionDockerMachine start ...
	I0701 14:15:53.748176 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:53.783926 3714493 main.go:141] libmachine: Using SSH client type: native
	I0701 14:15:53.784204 3714493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I0701 14:15:53.784213 3714493 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 14:15:53.785004 3714493 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45302->127.0.0.1:33900: read: connection reset by peer
	I0701 14:15:56.924633 3714493 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-929335
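(The first dial above failed with "connection reset by peer" because sshd inside the freshly started kic container was not yet accepting connections; libmachine simply retries until the handshake succeeds. The same connection can be reproduced by hand against the published port — a sketch using the key path and port from this run:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-929335   # prints 33900 here
	ssh -i /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa \
	    -p 33900 docker@127.0.0.1 hostname
)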
	
	I0701 14:15:56.924658 3714493 ubuntu.go:169] provisioning hostname "addons-929335"
	I0701 14:15:56.924726 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:56.942045 3714493 main.go:141] libmachine: Using SSH client type: native
	I0701 14:15:56.942278 3714493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I0701 14:15:56.942292 3714493 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-929335 && echo "addons-929335" | sudo tee /etc/hostname
	I0701 14:15:57.096923 3714493 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-929335
	
	I0701 14:15:57.097050 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:57.112810 3714493 main.go:141] libmachine: Using SSH client type: native
	I0701 14:15:57.113273 3714493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I0701 14:15:57.113306 3714493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-929335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-929335/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-929335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 14:15:57.253118 3714493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 14:15:57.253142 3714493 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19166-3708336/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-3708336/.minikube}
	I0701 14:15:57.253171 3714493 ubuntu.go:177] setting up certificates
	I0701 14:15:57.253181 3714493 provision.go:84] configureAuth start
	I0701 14:15:57.253247 3714493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-929335
	I0701 14:15:57.269614 3714493 provision.go:143] copyHostCerts
	I0701 14:15:57.269693 3714493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem (1675 bytes)
	I0701 14:15:57.269810 3714493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem (1082 bytes)
	I0701 14:15:57.269864 3714493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem (1123 bytes)
	I0701 14:15:57.269907 3714493 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem org=jenkins.addons-929335 san=[127.0.0.1 192.168.49.2 addons-929335 localhost minikube]
	I0701 14:15:57.445909 3714493 provision.go:177] copyRemoteCerts
	I0701 14:15:57.445977 3714493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 14:15:57.446051 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:57.461828 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:15:57.558793 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0701 14:15:57.582147 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 14:15:57.606316 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 14:15:57.630034 3714493 provision.go:87] duration metric: took 376.838475ms to configureAuth
	I0701 14:15:57.630067 3714493 ubuntu.go:193] setting minikube options for container-runtime
	I0701 14:15:57.630255 3714493 config.go:182] Loaded profile config "addons-929335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:15:57.630363 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:57.647176 3714493 main.go:141] libmachine: Using SSH client type: native
	I0701 14:15:57.647417 3714493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I0701 14:15:57.647433 3714493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0701 14:15:57.887076 3714493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0701 14:15:57.887098 3714493 machine.go:97] duration metric: took 4.138997162s to provisionDockerMachine
	I0701 14:15:57.887108 3714493 client.go:171] duration metric: took 12.998770826s to LocalClient.Create
	I0701 14:15:57.887120 3714493 start.go:167] duration metric: took 12.998836648s to libmachine.API.Create "addons-929335"
	I0701 14:15:57.887128 3714493 start.go:293] postStartSetup for "addons-929335" (driver="docker")
	I0701 14:15:57.887139 3714493 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 14:15:57.887203 3714493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 14:15:57.887243 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:57.904168 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:15:58.003397 3714493 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 14:15:58.007511 3714493 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 14:15:58.007550 3714493 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 14:15:58.007561 3714493 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 14:15:58.007571 3714493 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0701 14:15:58.007583 3714493 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/addons for local assets ...
	I0701 14:15:58.007664 3714493 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/files for local assets ...
	I0701 14:15:58.007690 3714493 start.go:296] duration metric: took 120.556462ms for postStartSetup
	I0701 14:15:58.008028 3714493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-929335
	I0701 14:15:58.028022 3714493 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/config.json ...
	I0701 14:15:58.028321 3714493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 14:15:58.028373 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:58.044783 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:15:58.137813 3714493 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 14:15:58.142149 3714493 start.go:128] duration metric: took 13.256462134s to createHost
	I0701 14:15:58.142172 3714493 start.go:83] releasing machines lock for "addons-929335", held for 13.256621323s
	I0701 14:15:58.142244 3714493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-929335
	I0701 14:15:58.158586 3714493 ssh_runner.go:195] Run: cat /version.json
	I0701 14:15:58.158649 3714493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 14:15:58.158723 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:58.158650 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:58.182111 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:15:58.186181 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:15:58.401037 3714493 ssh_runner.go:195] Run: systemctl --version
	I0701 14:15:58.405032 3714493 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0701 14:15:58.547228 3714493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 14:15:58.551572 3714493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 14:15:58.573446 3714493 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0701 14:15:58.573521 3714493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 14:15:58.608554 3714493 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0701 14:15:58.608579 3714493 start.go:494] detecting cgroup driver to use...
	I0701 14:15:58.608613 3714493 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0701 14:15:58.608664 3714493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 14:15:58.625680 3714493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 14:15:58.637518 3714493 docker.go:217] disabling cri-docker service (if available) ...
	I0701 14:15:58.637632 3714493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0701 14:15:58.653897 3714493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0701 14:15:58.669193 3714493 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0701 14:15:58.758868 3714493 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0701 14:15:58.857813 3714493 docker.go:233] disabling docker service ...
	I0701 14:15:58.857883 3714493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 14:15:58.878267 3714493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 14:15:58.890657 3714493 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 14:15:58.981579 3714493 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 14:15:59.067544 3714493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 14:15:59.079292 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 14:15:59.095373 3714493 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0701 14:15:59.095482 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.105296 3714493 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0701 14:15:59.105417 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.115131 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.125192 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.134917 3714493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 14:15:59.143807 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.153513 3714493 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.169649 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.179480 3714493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 14:15:59.188286 3714493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 14:15:59.196798 3714493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:15:59.285945 3714493 ssh_runner.go:195] Run: sudo systemctl restart crio
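(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart — reconstructed from the commands logged here, not a verbatim dump of the file:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
)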
	I0701 14:15:59.399846 3714493 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0701 14:15:59.399979 3714493 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0701 14:15:59.403562 3714493 start.go:562] Will wait 60s for crictl version
	I0701 14:15:59.403670 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:15:59.407101 3714493 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 14:15:59.446266 3714493 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0701 14:15:59.446410 3714493 ssh_runner.go:195] Run: crio --version
	I0701 14:15:59.482013 3714493 ssh_runner.go:195] Run: crio --version
	I0701 14:15:59.520538 3714493 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0701 14:15:59.522688 3714493 cli_runner.go:164] Run: docker network inspect addons-929335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 14:15:59.540151 3714493 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0701 14:15:59.543936 3714493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 14:15:59.554647 3714493 kubeadm.go:877] updating cluster {Name:addons-929335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-929335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 14:15:59.554777 3714493 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:15:59.554851 3714493 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 14:15:59.632151 3714493 crio.go:514] all images are preloaded for cri-o runtime.
	I0701 14:15:59.632172 3714493 crio.go:433] Images already preloaded, skipping extraction
	I0701 14:15:59.632229 3714493 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 14:15:59.670022 3714493 crio.go:514] all images are preloaded for cri-o runtime.
	I0701 14:15:59.670046 3714493 cache_images.go:84] Images are preloaded, skipping loading
	I0701 14:15:59.670055 3714493 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.2 crio true true} ...
	I0701 14:15:59.670151 3714493 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-929335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-929335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 14:15:59.670241 3714493 ssh_runner.go:195] Run: crio config
	I0701 14:15:59.730366 3714493 cni.go:84] Creating CNI manager for ""
	I0701 14:15:59.730391 3714493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 14:15:59.730404 3714493 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 14:15:59.730453 3714493 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-929335 NodeName:addons-929335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 14:15:59.730622 3714493 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-929335"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
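(A generated config like the one above can be sanity-checked offline before kubeadm consumes it; with kubeadm v1.30.x on PATH this would be a hypothetical invocation, not part of this run:

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
)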
	
	I0701 14:15:59.730697 3714493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 14:15:59.739813 3714493 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 14:15:59.739905 3714493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 14:15:59.748805 3714493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0701 14:15:59.767433 3714493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 14:15:59.785706 3714493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0701 14:15:59.803590 3714493 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0701 14:15:59.807149 3714493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 14:15:59.818038 3714493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:15:59.908866 3714493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 14:15:59.922064 3714493 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335 for IP: 192.168.49.2
	I0701 14:15:59.922087 3714493 certs.go:194] generating shared ca certs ...
	I0701 14:15:59.922105 3714493 certs.go:226] acquiring lock for ca certs: {Name:mkef61a10d340f62d4856e4c226678a7bd970ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:15:59.922277 3714493 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key
	I0701 14:16:00.634062 3714493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt ...
	I0701 14:16:00.634098 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt: {Name:mk7cc0d70948e4ed02cc6b03bd67d2393f1761b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:00.634305 3714493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key ...
	I0701 14:16:00.634318 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key: {Name:mk742c097069fed85f84c630a04fca6422948097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:00.634927 3714493 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key
	I0701 14:16:00.912069 3714493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt ...
	I0701 14:16:00.912101 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt: {Name:mk1111ab1a13b413b69bba0d83843e569a4ce1dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:00.912297 3714493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key ...
	I0701 14:16:00.912309 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key: {Name:mk6c1055d7fb8e3ff12df62e15d12d33ae8610e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:00.912875 3714493 certs.go:256] generating profile certs ...
	I0701 14:16:00.912941 3714493 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.key
	I0701 14:16:00.912960 3714493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt with IP's: []
	I0701 14:16:01.699178 3714493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt ...
	I0701 14:16:01.699223 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: {Name:mk182fca4cc3e0c307e79f2cccfa26f18a3683d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:01.699418 3714493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.key ...
	I0701 14:16:01.699432 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.key: {Name:mk5d3500d252311b100e5282c81fd294ecbf86e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:01.699520 3714493 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key.37fc1b00
	I0701 14:16:01.699541 3714493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt.37fc1b00 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0701 14:16:01.926771 3714493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt.37fc1b00 ...
	I0701 14:16:01.926806 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt.37fc1b00: {Name:mk4c0a011c17e9899ed7224593d62a91b67e2f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:01.927001 3714493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key.37fc1b00 ...
	I0701 14:16:01.927018 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key.37fc1b00: {Name:mk3308ef7ff745f4831105113fa3d3c9402f03bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:01.927113 3714493 certs.go:381] copying /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt.37fc1b00 -> /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt
	I0701 14:16:01.927192 3714493 certs.go:385] copying /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key.37fc1b00 -> /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key
	I0701 14:16:01.927252 3714493 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.key
	I0701 14:16:01.927270 3714493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.crt with IP's: []
	I0701 14:16:02.398156 3714493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.crt ...
	I0701 14:16:02.398190 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.crt: {Name:mkd44519905e4e13028e4ac695d33cab5461d876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:02.398849 3714493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.key ...
	I0701 14:16:02.398871 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.key: {Name:mk215786bde510015bc46c4b221b63c4f8549acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:02.399579 3714493 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 14:16:02.399625 3714493 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem (1082 bytes)
	I0701 14:16:02.399660 3714493 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem (1123 bytes)
	I0701 14:16:02.399694 3714493 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem (1675 bytes)
	I0701 14:16:02.400316 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 14:16:02.425817 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 14:16:02.450155 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 14:16:02.473291 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 14:16:02.496432 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0701 14:16:02.520341 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0701 14:16:02.544457 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 14:16:02.568333 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 14:16:02.592203 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 14:16:02.617084 3714493 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 14:16:02.634812 3714493 ssh_runner.go:195] Run: openssl version
	I0701 14:16:02.640385 3714493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 14:16:02.650103 3714493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:16:02.653730 3714493 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:16:02.653797 3714493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:16:02.660622 3714493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
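(The symlink name b5213941.0 follows OpenSSL's subject-hash convention: the `openssl x509 -hash` call above prints the hash under which TLS libraries look the CA up in /etc/ssl/certs, and minikube links the cert under that name. Checking it by hand against the same file:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected to print b5213941
)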
	I0701 14:16:02.670406 3714493 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 14:16:02.673869 3714493 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0701 14:16:02.673917 3714493 kubeadm.go:391] StartCluster: {Name:addons-929335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-929335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 14:16:02.674002 3714493 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0701 14:16:02.674064 3714493 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 14:16:02.710117 3714493 cri.go:89] found id: ""
	I0701 14:16:02.710221 3714493 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 14:16:02.718858 3714493 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 14:16:02.727650 3714493 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0701 14:16:02.727761 3714493 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 14:16:02.736365 3714493 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 14:16:02.736386 3714493 kubeadm.go:156] found existing configuration files:
	
	I0701 14:16:02.736461 3714493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0701 14:16:02.745176 3714493 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0701 14:16:02.745306 3714493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0701 14:16:02.753657 3714493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0701 14:16:02.762104 3714493 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0701 14:16:02.762188 3714493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0701 14:16:02.770704 3714493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0701 14:16:02.779144 3714493 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0701 14:16:02.779247 3714493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 14:16:02.787321 3714493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0701 14:16:02.795832 3714493 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0701 14:16:02.795918 3714493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 14:16:02.803930 3714493 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 14:16:02.851829 3714493 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0701 14:16:02.852056 3714493 kubeadm.go:309] [preflight] Running pre-flight checks
	I0701 14:16:02.910180 3714493 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0701 14:16:02.910294 3714493 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1063-aws
	I0701 14:16:02.910379 3714493 kubeadm.go:309] OS: Linux
	I0701 14:16:02.910445 3714493 kubeadm.go:309] CGROUPS_CPU: enabled
	I0701 14:16:02.910528 3714493 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0701 14:16:02.910592 3714493 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0701 14:16:02.910699 3714493 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0701 14:16:02.910770 3714493 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0701 14:16:02.910834 3714493 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0701 14:16:02.910883 3714493 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0701 14:16:02.910944 3714493 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0701 14:16:02.910996 3714493 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0701 14:16:02.978221 3714493 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0701 14:16:02.978409 3714493 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0701 14:16:02.978541 3714493 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0701 14:16:03.199067 3714493 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0701 14:16:03.202966 3714493 out.go:204]   - Generating certificates and keys ...
	I0701 14:16:03.203082 3714493 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0701 14:16:03.203164 3714493 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0701 14:16:03.668055 3714493 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0701 14:16:04.509348 3714493 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0701 14:16:05.464755 3714493 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0701 14:16:05.879208 3714493 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0701 14:16:06.586814 3714493 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0701 14:16:06.586967 3714493 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-929335 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0701 14:16:07.565364 3714493 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0701 14:16:07.565521 3714493 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-929335 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0701 14:16:07.718316 3714493 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0701 14:16:08.047750 3714493 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0701 14:16:08.571041 3714493 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0701 14:16:08.571352 3714493 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0701 14:16:08.749871 3714493 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0701 14:16:09.147640 3714493 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0701 14:16:09.543890 3714493 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0701 14:16:09.896370 3714493 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0701 14:16:10.361785 3714493 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0701 14:16:10.362685 3714493 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0701 14:16:10.365769 3714493 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0701 14:16:10.368028 3714493 out.go:204]   - Booting up control plane ...
	I0701 14:16:10.368160 3714493 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0701 14:16:10.368242 3714493 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0701 14:16:10.369305 3714493 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0701 14:16:10.386119 3714493 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0701 14:16:10.387516 3714493 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0701 14:16:10.387738 3714493 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0701 14:16:10.488045 3714493 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0701 14:16:10.488132 3714493 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0701 14:16:11.989612 3714493 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.501644867s
	I0701 14:16:11.989707 3714493 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0701 14:16:18.991274 3714493 kubeadm.go:309] [api-check] The API server is healthy after 7.001606639s
	I0701 14:16:19.011022 3714493 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0701 14:16:19.027312 3714493 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0701 14:16:19.083253 3714493 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0701 14:16:19.083446 3714493 kubeadm.go:309] [mark-control-plane] Marking the node addons-929335 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0701 14:16:19.101990 3714493 kubeadm.go:309] [bootstrap-token] Using token: ypqvhc.tyl4o7d4g0682pi9
	I0701 14:16:19.104330 3714493 out.go:204]   - Configuring RBAC rules ...
	I0701 14:16:19.104455 3714493 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0701 14:16:19.108412 3714493 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0701 14:16:19.121674 3714493 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0701 14:16:19.126021 3714493 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0701 14:16:19.130540 3714493 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0701 14:16:19.136155 3714493 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0701 14:16:19.397600 3714493 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0701 14:16:19.842548 3714493 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0701 14:16:20.402445 3714493 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0701 14:16:20.403731 3714493 kubeadm.go:309] 
	I0701 14:16:20.403804 3714493 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0701 14:16:20.403814 3714493 kubeadm.go:309] 
	I0701 14:16:20.403888 3714493 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0701 14:16:20.403897 3714493 kubeadm.go:309] 
	I0701 14:16:20.403922 3714493 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0701 14:16:20.403982 3714493 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0701 14:16:20.404035 3714493 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0701 14:16:20.404044 3714493 kubeadm.go:309] 
	I0701 14:16:20.404096 3714493 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0701 14:16:20.404104 3714493 kubeadm.go:309] 
	I0701 14:16:20.404150 3714493 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0701 14:16:20.404158 3714493 kubeadm.go:309] 
	I0701 14:16:20.404209 3714493 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0701 14:16:20.404284 3714493 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0701 14:16:20.404353 3714493 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0701 14:16:20.404362 3714493 kubeadm.go:309] 
	I0701 14:16:20.404443 3714493 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0701 14:16:20.404520 3714493 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0701 14:16:20.404529 3714493 kubeadm.go:309] 
	I0701 14:16:20.404609 3714493 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ypqvhc.tyl4o7d4g0682pi9 \
	I0701 14:16:20.404711 3714493 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:147605410e7daebab3b068442614c3748ab53b9f1af728ca2913c2913dc90190 \
	I0701 14:16:20.404735 3714493 kubeadm.go:309] 	--control-plane 
	I0701 14:16:20.404744 3714493 kubeadm.go:309] 
	I0701 14:16:20.404825 3714493 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0701 14:16:20.404833 3714493 kubeadm.go:309] 
	I0701 14:16:20.404935 3714493 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ypqvhc.tyl4o7d4g0682pi9 \
	I0701 14:16:20.405053 3714493 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:147605410e7daebab3b068442614c3748ab53b9f1af728ca2913c2913dc90190 
	I0701 14:16:20.408632 3714493 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1063-aws\n", err: exit status 1
	I0701 14:16:20.408751 3714493 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0701 14:16:20.408772 3714493 cni.go:84] Creating CNI manager for ""
	I0701 14:16:20.408783 3714493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 14:16:20.410804 3714493 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 14:16:20.412471 3714493 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 14:16:20.416311 3714493 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0701 14:16:20.416332 3714493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0701 14:16:20.437079 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 14:16:20.711020 3714493 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 14:16:20.711169 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:20.711267 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-929335 minikube.k8s.io/updated_at=2024_07_01T14_16_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c minikube.k8s.io/name=addons-929335 minikube.k8s.io/primary=true
	I0701 14:16:20.724072 3714493 ops.go:34] apiserver oom_adj: -16
	I0701 14:16:20.866671 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:21.367706 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:21.866828 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:22.366823 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:22.867571 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:23.367405 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:23.866798 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:24.367603 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:24.867298 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:25.366818 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:25.867502 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:26.367284 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:26.867540 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:27.367701 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:27.867733 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:28.366856 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:28.866903 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:29.367275 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:29.867547 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:30.367587 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:30.866995 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:31.367733 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:31.867506 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:32.367261 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:32.867677 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:33.367224 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:33.867549 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:34.017778 3714493 kubeadm.go:1107] duration metric: took 13.306658129s to wait for elevateKubeSystemPrivileges
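Annotation: the burst of identical `kubectl get sa default` lines above is minikube polling, at roughly 500ms intervals, until the `default` service account exists in kube-system; that is the readiness signal behind the elevateKubeSystemPrivileges duration reported here. A hedged Go sketch of the same wait loop (command and paths copied from the log; the timeout value is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls until `kubectl get sa default` succeeds,
	// mirroring the repeated ssh_runner lines in the log above.
	func waitForDefaultSA(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.2/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				return nil // service account exists; RBAC bootstrap can proceed
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA(2 * time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("default service account is ready")
	}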
	W0701 14:16:34.017812 3714493 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0701 14:16:34.017820 3714493 kubeadm.go:393] duration metric: took 31.343907492s to StartCluster
	I0701 14:16:34.017837 3714493 settings.go:142] acquiring lock: {Name:mke9008d6920f4be65eddeda5d60c738ed3823ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:34.017959 3714493 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:16:34.018393 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/kubeconfig: {Name:mk4d5838a81c57a1d9ec9a509328664588dd34aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:34.018605 3714493 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0701 14:16:34.018715 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 14:16:34.018992 3714493 config.go:182] Loaded profile config "addons-929335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:16:34.019024 3714493 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0701 14:16:34.019110 3714493 addons.go:69] Setting yakd=true in profile "addons-929335"
	I0701 14:16:34.019132 3714493 addons.go:234] Setting addon yakd=true in "addons-929335"
	I0701 14:16:34.019156 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.019627 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.020110 3714493 addons.go:69] Setting metrics-server=true in profile "addons-929335"
	I0701 14:16:34.020142 3714493 addons.go:234] Setting addon metrics-server=true in "addons-929335"
	I0701 14:16:34.020177 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.020618 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.020780 3714493 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-929335"
	I0701 14:16:34.020808 3714493 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-929335"
	I0701 14:16:34.020834 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.021383 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.021779 3714493 addons.go:69] Setting cloud-spanner=true in profile "addons-929335"
	I0701 14:16:34.021811 3714493 addons.go:234] Setting addon cloud-spanner=true in "addons-929335"
	I0701 14:16:34.021839 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.022440 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.025353 3714493 addons.go:69] Setting registry=true in profile "addons-929335"
	I0701 14:16:34.025396 3714493 addons.go:234] Setting addon registry=true in "addons-929335"
	I0701 14:16:34.025430 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.025874 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.026037 3714493 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-929335"
	I0701 14:16:34.026084 3714493 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-929335"
	I0701 14:16:34.026106 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.026494 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.045571 3714493 addons.go:69] Setting default-storageclass=true in profile "addons-929335"
	I0701 14:16:34.045635 3714493 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-929335"
	I0701 14:16:34.045827 3714493 addons.go:69] Setting storage-provisioner=true in profile "addons-929335"
	I0701 14:16:34.045852 3714493 addons.go:234] Setting addon storage-provisioner=true in "addons-929335"
	I0701 14:16:34.045887 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.046377 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.047538 3714493 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-929335"
	I0701 14:16:34.047635 3714493 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-929335"
	I0701 14:16:34.048754 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.065900 3714493 addons.go:69] Setting volcano=true in profile "addons-929335"
	I0701 14:16:34.066023 3714493 addons.go:234] Setting addon volcano=true in "addons-929335"
	I0701 14:16:34.066100 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.066675 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.046588 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.046596 3714493 addons.go:69] Setting gcp-auth=true in profile "addons-929335"
	I0701 14:16:34.079946 3714493 mustload.go:65] Loading cluster: addons-929335
	I0701 14:16:34.080178 3714493 config.go:182] Loaded profile config "addons-929335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:16:34.046603 3714493 addons.go:69] Setting ingress=true in profile "addons-929335"
	I0701 14:16:34.092598 3714493 addons.go:234] Setting addon ingress=true in "addons-929335"
	I0701 14:16:34.097387 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.046607 3714493 addons.go:69] Setting ingress-dns=true in profile "addons-929335"
	I0701 14:16:34.046613 3714493 addons.go:69] Setting inspektor-gadget=true in profile "addons-929335"
	I0701 14:16:34.046680 3714493 out.go:177] * Verifying Kubernetes components...
	I0701 14:16:34.092517 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.092557 3714493 addons.go:69] Setting volumesnapshots=true in profile "addons-929335"
	I0701 14:16:34.101619 3714493 addons.go:234] Setting addon ingress-dns=true in "addons-929335"
	I0701 14:16:34.101719 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.111422 3714493 addons.go:234] Setting addon inspektor-gadget=true in "addons-929335"
	I0701 14:16:34.113267 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.113889 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.134020 3714493 addons.go:234] Setting addon volumesnapshots=true in "addons-929335"
	I0701 14:16:34.134121 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.134668 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.150443 3714493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:16:34.156218 3714493 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0701 14:16:34.166506 3714493 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 14:16:34.166576 3714493 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 14:16:34.166691 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
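Annotation: each `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` call above resolves the host port Docker mapped to the node container's SSH port 22; the later sshutil lines then dial 127.0.0.1 on that port (33900 in this run). A small Go sketch of the same lookup, assuming only the container name and Go template from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks Docker which host port is bound to the container's
	// 22/tcp, using the same Go-template query as the cli_runner lines.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("addons-929335")
		if err != nil {
			panic(err)
		}
		// The ssh client then connects to 127.0.0.1:<port>, e.g. 33900 here.
		fmt.Println("ssh endpoint: 127.0.0.1:" + port)
	}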
	I0701 14:16:34.177618 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.178279 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.217350 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0701 14:16:34.224625 3714493 out.go:177]   - Using image docker.io/registry:2.8.3
	I0701 14:16:34.226158 3714493 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0701 14:16:34.226323 3714493 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0701 14:16:34.226469 3714493 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0701 14:16:34.229994 3714493 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0701 14:16:34.230064 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0701 14:16:34.230160 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.247694 3714493 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0701 14:16:34.247757 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0701 14:16:34.247864 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.247989 3714493 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0701 14:16:34.248227 3714493 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0701 14:16:34.248239 3714493 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0701 14:16:34.248291 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.282251 3714493 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-929335"
	I0701 14:16:34.282341 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.282849 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.298698 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0701 14:16:34.299021 3714493 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0701 14:16:34.299039 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0701 14:16:34.299107 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.342239 3714493 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 14:16:34.344514 3714493 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 14:16:34.344539 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 14:16:34.344607 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	W0701 14:16:34.351627 3714493 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0701 14:16:34.353410 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0701 14:16:34.355393 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0701 14:16:34.355591 3714493 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0701 14:16:34.356120 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.361887 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0701 14:16:34.361905 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0701 14:16:34.361969 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.371335 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0701 14:16:34.373082 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0701 14:16:34.375153 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0701 14:16:34.376917 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0701 14:16:34.377987 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 14:16:34.385067 3714493 addons.go:234] Setting addon default-storageclass=true in "addons-929335"
	I0701 14:16:34.385110 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.385532 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.385732 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0701 14:16:34.385747 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0701 14:16:34.385801 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.400303 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.401154 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0701 14:16:34.404862 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0701 14:16:34.404930 3714493 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0701 14:16:34.405087 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.459254 3714493 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0701 14:16:34.461277 3714493 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0701 14:16:34.464649 3714493 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0701 14:16:34.467502 3714493 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0701 14:16:34.467678 3714493 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0701 14:16:34.467697 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0701 14:16:34.467765 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.472341 3714493 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0701 14:16:34.472363 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0701 14:16:34.472432 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.490919 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.510694 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.510816 3714493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 14:16:34.522962 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.526792 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.569057 3714493 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0701 14:16:34.571412 3714493 out.go:177]   - Using image docker.io/busybox:stable
	I0701 14:16:34.576762 3714493 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0701 14:16:34.576781 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0701 14:16:34.576850 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.608784 3714493 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 14:16:34.608845 3714493 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 14:16:34.608938 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.619735 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.620166 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.624184 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.628534 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.644585 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.650554 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.674245 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.682288 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.880571 3714493 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 14:16:34.880595 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0701 14:16:34.906530 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0701 14:16:34.945534 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0701 14:16:35.049105 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0701 14:16:35.062378 3714493 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0701 14:16:35.062448 3714493 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0701 14:16:35.065120 3714493 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0701 14:16:35.065181 3714493 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0701 14:16:35.068346 3714493 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 14:16:35.068416 3714493 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 14:16:35.073416 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0701 14:16:35.073509 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0701 14:16:35.102107 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 14:16:35.107455 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0701 14:16:35.120817 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0701 14:16:35.120886 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0701 14:16:35.128086 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0701 14:16:35.194687 3714493 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0701 14:16:35.194758 3714493 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0701 14:16:35.198166 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 14:16:35.215393 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0701 14:16:35.215506 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0701 14:16:35.223034 3714493 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 14:16:35.223117 3714493 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 14:16:35.259603 3714493 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0701 14:16:35.259676 3714493 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0701 14:16:35.275817 3714493 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0701 14:16:35.276165 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0701 14:16:35.326307 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0701 14:16:35.326379 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0701 14:16:35.369462 3714493 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0701 14:16:35.369541 3714493 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0701 14:16:35.388923 3714493 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0701 14:16:35.389004 3714493 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0701 14:16:35.405282 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0701 14:16:35.405355 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0701 14:16:35.422319 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0701 14:16:35.447016 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 14:16:35.493806 3714493 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0701 14:16:35.493891 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0701 14:16:35.500764 3714493 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0701 14:16:35.500828 3714493 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0701 14:16:35.527822 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0701 14:16:35.527898 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0701 14:16:35.531613 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0701 14:16:35.531676 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0701 14:16:35.585433 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0701 14:16:35.635054 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0701 14:16:35.635124 3714493 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0701 14:16:35.640424 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0701 14:16:35.640494 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0701 14:16:35.699546 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0701 14:16:35.699624 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0701 14:16:35.789406 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0701 14:16:35.789468 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0701 14:16:35.847372 3714493 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0701 14:16:35.847445 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0701 14:16:35.857297 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0701 14:16:35.857359 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0701 14:16:35.930577 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0701 14:16:35.930650 3714493 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0701 14:16:35.959377 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0701 14:16:35.959451 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0701 14:16:36.017769 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0701 14:16:36.074533 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0701 14:16:36.074602 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0701 14:16:36.090570 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0701 14:16:36.090677 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0701 14:16:36.193282 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0701 14:16:36.193353 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0701 14:16:36.216757 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0701 14:16:36.290921 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0701 14:16:36.291000 3714493 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0701 14:16:36.311402 3714493 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.933386242s)
	I0701 14:16:36.311516 3714493 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
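Annotation: the sed pipeline that just completed splices a `hosts` block into the CoreDNS Corefile ahead of the `forward . /etc/resolv.conf` line and pushes the edited ConfigMap back with `kubectl replace -f -`; that is what makes `host.minikube.internal` resolve to the gateway 192.168.49.1 from inside pods. For reference, the injected stanza (text copied verbatim from the command in the log, wrapped in a trivial Go program only to keep these annotations in one language):

	package main

	import "fmt"

	// hostsStanza is the fragment the sed expression inserts into the Corefile.
	const hostsStanza = `        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }`

	func main() { fmt.Println(hostsStanza) }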
	I0701 14:16:36.311486 3714493 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.800650825s)
	I0701 14:16:36.312409 3714493 node_ready.go:35] waiting up to 6m0s for node "addons-929335" to be "Ready" ...
	I0701 14:16:36.394770 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0701 14:16:37.883303 3714493 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-929335" context rescaled to 1 replicas
	I0701 14:16:38.413681 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.507112665s)
	I0701 14:16:38.413784 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.468224774s)
	I0701 14:16:38.481246 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:38.762505 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.713308551s)
	I0701 14:16:39.225078 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.12288642s)
	I0701 14:16:39.233206 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.125664976s)
	I0701 14:16:40.070713 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.942553861s)
	I0701 14:16:40.070800 3714493 addons.go:475] Verifying addon ingress=true in "addons-929335"
	I0701 14:16:40.071041 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.872807842s)
	I0701 14:16:40.071283 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.648889468s)
	I0701 14:16:40.071486 3714493 addons.go:475] Verifying addon registry=true in "addons-929335"
	I0701 14:16:40.071351 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.624259143s)
	I0701 14:16:40.071656 3714493 addons.go:475] Verifying addon metrics-server=true in "addons-929335"
	I0701 14:16:40.071384 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.48587525s)
	I0701 14:16:40.073146 3714493 out.go:177] * Verifying ingress addon...
	I0701 14:16:40.073246 3714493 out.go:177] * Verifying registry addon...
	I0701 14:16:40.074988 3714493 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-929335 service yakd-dashboard -n yakd-dashboard
	
	I0701 14:16:40.075864 3714493 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0701 14:16:40.076826 3714493 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0701 14:16:40.114717 3714493 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0701 14:16:40.114819 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
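Annotation: the `kapi.go:96` lines that dominate the remainder of this log are label-selector waits: minikube lists the pods matching a selector and re-checks until every one is Running (here they stay Pending until the node itself goes Ready). A rough equivalent using kubectl's JSONPath output, assuming the namespace and selector shown in the log; the polling loop itself is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podsRunning reports whether every pod matching the selector is Running.
	func podsRunning(ns, selector string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pods", "-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err != nil {
			return false, err
		}
		phases := strings.Fields(string(out))
		if len(phases) == 0 {
			return false, nil // nothing scheduled yet
		}
		for _, p := range phases {
			if p != "Running" {
				return false, nil // e.g. Pending, as in the log
			}
		}
		return true, nil
	}

	func main() {
		for {
			ok, err := podsRunning("kube-system", "kubernetes.io/minikube-addons=registry")
			if err == nil && ok {
				fmt.Println("registry pods are Running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}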
	I0701 14:16:40.119951 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.102065283s)
	W0701 14:16:40.120011 3714493 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0701 14:16:40.120038 3714493 retry.go:31] will retry after 205.150726ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0701 14:16:40.120111 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.903280624s)
	I0701 14:16:40.121342 3714493 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0701 14:16:40.121358 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:40.325909 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
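Annotation: the failure a few lines up is a CRD ordering race: the volumesnapshot CRDs and a VolumeSnapshotClass object were applied in one batch, and the API server had not yet registered the new kinds when the class was submitted ("ensure CRDs are installed first"). minikube's response, visible in the retry.go line and the re-run just above, is to back off ~205ms and re-apply with `--force`. A hedged Go sketch of that retry-on-race pattern (file list and single-retry structure are illustrative; the kubectl flags mirror the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` when the CRD-not-yet-registered
	// race is detected, escalating to --force on the retry like the log does.
	func applyWithRetry(files []string) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		if !strings.Contains(string(out), "ensure CRDs are installed first") {
			return fmt.Errorf("apply failed: %v\n%s", err, out)
		}
		time.Sleep(205 * time.Millisecond) // backoff chosen by retry.go in the log
		retry := append([]string{"apply", "--force"}, args[1:]...)
		out, err = exec.Command("kubectl", retry...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("retry failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		err := applyWithRetry([]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"})
		fmt.Println("result:", err)
	}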
	I0701 14:16:40.580377 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.185514377s)
	I0701 14:16:40.580478 3714493 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-929335"
	I0701 14:16:40.583735 3714493 out.go:177] * Verifying csi-hostpath-driver addon...
	I0701 14:16:40.586713 3714493 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0701 14:16:40.594882 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:40.609140 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:40.613911 3714493 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0701 14:16:40.613980 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:40.816367 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:41.080335 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:41.082034 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:41.091835 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:41.565486 3714493 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0701 14:16:41.565570 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:41.590563 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:41.601562 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:41.611170 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:41.611920 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:41.805436 3714493 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0701 14:16:41.841584 3714493 addons.go:234] Setting addon gcp-auth=true in "addons-929335"
	I0701 14:16:41.841685 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:41.842239 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:41.861355 3714493 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0701 14:16:41.861413 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:41.880323 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:42.087796 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:42.090774 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:42.097128 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:42.581050 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:42.581421 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:42.591739 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:42.818588 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:43.084361 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:43.085481 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:43.097287 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:43.258329 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.932315114s)
	I0701 14:16:43.258502 3714493 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.39711314s)
	I0701 14:16:43.260477 3714493 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0701 14:16:43.262235 3714493 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0701 14:16:43.263843 3714493 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0701 14:16:43.263898 3714493 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0701 14:16:43.289146 3714493 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0701 14:16:43.289215 3714493 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0701 14:16:43.308260 3714493 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0701 14:16:43.308331 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0701 14:16:43.332622 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0701 14:16:43.581225 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:43.583473 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:43.591619 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:44.112734 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:44.113967 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:44.137665 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:44.234862 3714493 addons.go:475] Verifying addon gcp-auth=true in "addons-929335"
	I0701 14:16:44.236796 3714493 out.go:177] * Verifying gcp-auth addon...
	I0701 14:16:44.239981 3714493 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0701 14:16:44.252868 3714493 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0701 14:16:44.252943 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:44.581786 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:44.583186 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:44.591571 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:44.744040 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:45.091760 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:45.092280 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:45.096570 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:45.244845 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:45.316121 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:45.580977 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:45.583462 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:45.592691 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:45.743455 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:46.081399 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:46.081769 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:46.097919 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:46.244357 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:46.581409 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:46.582440 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:46.591477 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:46.743506 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:47.082516 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:47.083590 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:47.091334 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:47.244489 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:47.579819 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:47.581891 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:47.590923 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:47.743710 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:47.816128 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:48.081822 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:48.082306 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:48.092660 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:48.244637 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:48.580657 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:48.581493 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:48.591772 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:48.744400 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:49.079804 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:49.081694 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:49.092515 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:49.243623 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:49.580958 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:49.581329 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:49.591195 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:49.744011 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:49.816434 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:50.082335 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:50.083139 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:50.091619 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:50.243526 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:50.581336 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:50.582087 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:50.592499 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:50.744123 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:51.082085 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:51.082565 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:51.091484 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:51.244335 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:51.580931 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:51.582249 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:51.591555 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:51.743315 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:52.080722 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:52.081149 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:52.091088 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:52.244361 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:52.316143 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:52.580180 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:52.580590 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:52.591134 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:52.743929 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:53.081743 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:53.082502 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:53.092003 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:53.244465 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:53.581198 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:53.581366 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:53.591498 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:53.743784 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:54.080401 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:54.080788 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:54.090966 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:54.244344 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:54.316346 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:54.579922 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:54.580626 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:54.591119 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:54.744014 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:55.080124 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:55.082492 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:55.092937 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:55.244606 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:55.581524 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:55.581905 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:55.591526 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:55.743944 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:56.080659 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:56.081736 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:56.091158 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:56.244323 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:56.580509 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:56.580824 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:56.591465 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:56.743982 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:56.816430 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:57.080545 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:57.080997 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:57.091857 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:57.243711 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:57.580859 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:57.581093 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:57.590824 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:57.744460 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:58.080569 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:58.082411 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:58.091207 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:58.245509 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:58.580574 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:58.581893 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:58.591628 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:58.744093 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:58.816533 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:59.080818 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:59.081423 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:59.091411 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:59.243970 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:59.581367 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:59.582123 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:59.591614 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:59.743472 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:00.105087 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:00.105319 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:00.114476 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:00.244951 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:00.581487 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:00.582108 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:00.591241 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:00.744250 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:01.081456 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:01.082717 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:01.090945 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:01.243772 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:01.315836 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:17:01.580993 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:01.582205 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:01.591016 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:01.744216 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:02.082352 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:02.083194 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:02.091988 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:02.245410 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:02.580393 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:02.582327 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:02.591133 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:02.744415 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:03.081584 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:03.083205 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:03.091034 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:03.244147 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:03.316176 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:17:03.581041 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:03.581899 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:03.591547 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:03.744129 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:04.086859 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:04.089289 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:04.093086 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:04.243977 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:04.581989 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:04.582827 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:04.591736 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:04.744259 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:05.081576 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:05.082352 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:05.094714 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:05.244684 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:05.317062 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:17:05.581387 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:05.582634 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:05.592027 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:05.743753 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:06.081280 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:06.082063 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:06.091294 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:06.244522 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:06.580817 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:06.581584 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:06.591150 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:06.745955 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:07.080430 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:07.082352 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:07.091494 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:07.243868 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:07.579829 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:07.581908 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:07.591673 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:07.743653 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:07.815919 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:17:08.121096 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:08.121891 3714493 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0701 14:17:08.121908 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:08.132177 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:08.249372 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:08.342301 3714493 node_ready.go:49] node "addons-929335" has status "Ready":"True"
	I0701 14:17:08.342330 3714493 node_ready.go:38] duration metric: took 32.029893573s for node "addons-929335" to be "Ready" ...
	I0701 14:17:08.342341 3714493 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
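
The node_ready.go lines above poll the node's Ready condition until it flips to "True", then report the elapsed time as a duration metric (32.029893573s here). As a rough illustration of that pattern — a minimal client-go sketch, not minikube's actual node_ready.go; the helper name waitForNodeReady, the 2-second poll interval, and the kubeconfig loading are all assumptions — the wait can be written as:

	// Minimal sketch (assumed helper name and poll interval); mirrors the
	// node_ready.go polling logged above, not minikube's actual code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls until the named node reports Ready=True, then
	// prints a duration metric like the log line above.
	func waitForNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err == nil {
			fmt.Printf("duration metric: took %s for node %q to be \"Ready\"\n", time.Since(start), name)
		}
		return err
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)
		if err := waitForNodeReady(client, "addons-929335", 6*time.Minute); err != nil {
			panic(err)
		}
	}

Once the node reports Ready, the run above immediately switches to per-pod waits (pod_ready.go), which is why the coredns wait starts on the next line.
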
	I0701 14:17:08.407878 3714493 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s8jw9" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:08.584824 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:08.585208 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:08.591823 3714493 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0701 14:17:08.591892 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:08.747802 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:09.096665 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:09.097434 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:09.104098 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:09.243844 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:09.582201 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:09.586042 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:09.593329 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:09.743661 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:10.099148 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:10.111876 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:10.113382 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:10.244252 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:10.414173 3714493 pod_ready.go:102] pod "coredns-7db6d8ff4d-s8jw9" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:10.588255 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:10.589913 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:10.594007 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:10.745176 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:11.084905 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:11.089257 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:11.103084 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:11.244429 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:11.414405 3714493 pod_ready.go:92] pod "coredns-7db6d8ff4d-s8jw9" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.414474 3714493 pod_ready.go:81] duration metric: took 3.006514273s for pod "coredns-7db6d8ff4d-s8jw9" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.414514 3714493 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.428148 3714493 pod_ready.go:92] pod "etcd-addons-929335" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.428217 3714493 pod_ready.go:81] duration metric: took 13.671197ms for pod "etcd-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.428246 3714493 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.433167 3714493 pod_ready.go:92] pod "kube-apiserver-addons-929335" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.433237 3714493 pod_ready.go:81] duration metric: took 4.969126ms for pod "kube-apiserver-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.433263 3714493 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.446230 3714493 pod_ready.go:92] pod "kube-controller-manager-addons-929335" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.446302 3714493 pod_ready.go:81] duration metric: took 13.017643ms for pod "kube-controller-manager-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.446333 3714493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b7sh5" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.459641 3714493 pod_ready.go:92] pod "kube-proxy-b7sh5" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.459722 3714493 pod_ready.go:81] duration metric: took 13.367964ms for pod "kube-proxy-b7sh5" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.459749 3714493 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.585639 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:11.591679 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:11.616799 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:11.747435 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:11.812343 3714493 pod_ready.go:92] pod "kube-scheduler-addons-929335" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.812369 3714493 pod_ready.go:81] duration metric: took 352.598979ms for pod "kube-scheduler-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.812389 3714493 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace to be "Ready" ...
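
The kapi.go:96 lines interleaved throughout come from a different kind of wait: listing pods by label selector and logging the current phase until everything under the selector is Running — which is why "Found N Pods for label selector" appears just before the state eventually changes. From here on, pod_ready.go keeps reporting metrics-server-c59844bb4-7ddxq as not Ready for the remainder of this span. A minimal sketch of the selector-based wait — a hypothetical helper waitForPodsBySelector built on client-go, not minikube's actual kapi.go; the poll interval is an assumption — looks like:

	// Minimal sketch (hypothetical helper; poll interval assumed),
	// approximating the kapi.go selector wait logged above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsBySelector polls until at least one pod matches selector and
	// every match is Running, logging the current phase on each attempt.
	func waitForPodsBySelector(client kubernetes.Interface, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			// "" lists pods across all namespaces.
			pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing matching yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)
		// e.g. the gcp-auth wait from the log above:
		if err := waitForPodsBySelector(client, "kubernetes.io/minikube-addons=gcp-auth", 6*time.Minute); err != nil {
			panic(err)
		}
	}

Note that a pod can be Running without being Ready, which is the distinction between these kapi.go waits and the pod_ready.go "Ready":"False" lines that follow for metrics-server.
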
	I0701 14:17:12.086056 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:12.087355 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:12.094836 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:12.243945 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:12.583836 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:12.594583 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:12.603588 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:12.743704 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:13.084219 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:13.098737 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:13.100774 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:13.244223 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:13.583124 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:13.583751 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:13.592603 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:13.744082 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:13.824263 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:14.084580 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:14.086741 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:14.093056 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:14.244983 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:14.582155 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:14.582696 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:14.592536 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:14.743665 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:15.081735 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:15.082950 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:15.094604 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:15.245086 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:15.584316 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:15.586002 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:15.592428 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:15.747106 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:16.084605 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:16.085955 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:16.093500 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:16.245783 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:16.320486 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:16.595919 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:16.596344 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:16.605544 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:16.744506 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:17.082304 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:17.082744 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:17.093918 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:17.243917 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:17.581963 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:17.582581 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:17.592845 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:17.743059 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:18.082351 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:18.083249 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:18.093350 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:18.244265 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:18.581308 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:18.586706 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:18.593641 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:18.744406 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:18.818252 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:19.084509 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:19.086426 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:19.093603 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:19.247175 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:19.579923 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:19.582723 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:19.592470 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:19.752591 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:20.082363 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:20.083528 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:20.092461 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:20.243746 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:20.580418 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:20.582399 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:20.591564 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:20.744811 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:20.818777 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:21.089122 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:21.092473 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:21.104228 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:21.250201 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:21.585518 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:21.589157 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:21.595584 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:21.744853 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:22.081055 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:22.085304 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:22.097664 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:22.246094 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:22.582647 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:22.590030 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:22.596549 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:22.744462 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:22.819648 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:23.084611 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:23.086186 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:23.092796 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:23.244775 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:23.584771 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:23.586538 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:23.593752 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:23.744345 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:24.083918 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:24.086065 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:24.100601 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:24.244313 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:24.585008 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:24.588076 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:24.603405 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:24.743971 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:25.084738 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:25.089390 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:25.108787 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:25.249773 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:25.320530 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:25.582275 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:25.582948 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:25.593324 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:25.743834 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:26.081822 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:26.083483 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:26.091958 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:26.246513 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:26.581070 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:26.584358 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:26.592073 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:26.744060 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:27.083933 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:27.087574 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:27.098038 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:27.244736 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:27.326496 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:27.584147 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:27.585761 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:27.595737 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:27.745504 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:28.080450 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:28.083180 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:28.093785 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:28.252622 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:28.581367 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:28.584959 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:28.592453 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:28.743215 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:29.083390 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:29.084203 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:29.093746 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:29.243930 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:29.582119 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:29.583080 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:29.592096 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:29.745654 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:29.818848 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:30.080917 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:30.083572 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:30.092522 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:30.268887 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:30.585177 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:30.586384 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:30.594151 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:30.744094 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:31.082249 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:31.083768 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:31.094398 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:31.247133 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:31.580461 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:31.581652 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:31.593768 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:31.744234 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:31.820932 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:32.084070 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:32.085545 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:32.101092 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:32.254086 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:32.582906 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:32.592488 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:32.600965 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:32.744548 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:33.081519 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:33.086743 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:33.115393 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:33.274825 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:33.609454 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:33.622999 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:33.631870 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:33.744724 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:33.823173 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:34.088408 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:34.095624 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:34.100334 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:34.244279 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:34.581965 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:34.583004 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:34.592727 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:34.744535 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:35.083196 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:35.084170 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:35.094791 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:35.245967 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:35.581436 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:35.581628 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:35.592871 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:35.743886 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:36.083365 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:36.084925 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:36.093416 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:36.244676 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:36.320502 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:36.583626 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:36.584210 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:36.592820 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:36.747418 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:37.080565 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:37.081780 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:37.092547 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:37.244560 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:37.582541 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:37.583847 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:37.592458 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:37.743742 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:38.081799 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:38.083404 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:38.093611 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:38.243966 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:38.582447 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:38.583327 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:38.592080 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:38.744494 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:38.819616 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:39.083755 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:39.085354 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:39.092742 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:39.258682 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:39.588577 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:39.590657 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:39.599236 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:39.744540 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:40.082499 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:40.085942 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:40.092602 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:40.245146 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:40.586288 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:40.588438 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:40.593104 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:40.743713 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:40.819721 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:41.086088 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:41.090200 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:41.100932 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:41.244962 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:41.586086 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:41.587090 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:41.598139 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:41.744196 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:42.095449 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:42.096226 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:42.116461 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:42.244514 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:42.583737 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:42.585299 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:42.600036 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:42.744055 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:42.826791 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:43.083077 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:43.084537 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:43.092305 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:43.244242 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:43.580998 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:43.585488 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:43.592383 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:43.743836 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:44.082834 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:44.085650 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:44.096622 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:44.250671 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:44.584404 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:44.584838 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:44.593505 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:44.757588 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:44.833972 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:45.081380 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:45.083888 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:45.096816 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:45.248338 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:45.581977 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:45.582421 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:45.593981 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:45.743497 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:46.082952 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:46.084384 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:46.093943 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:46.244309 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:46.594520 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:46.596581 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:46.606991 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:46.744674 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:47.089882 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:47.091515 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:47.104966 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:47.246405 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:47.320564 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:47.586580 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:47.587992 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:47.604259 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:47.743954 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:48.084946 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:48.086374 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:48.097192 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:48.245240 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:48.584910 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:48.591451 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:48.596175 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:48.750309 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:49.083471 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:49.091054 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:49.097179 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:49.245629 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:49.321272 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:49.583149 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:49.586437 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:49.592746 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:49.744535 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:50.083448 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:50.085346 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:50.093173 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:50.244371 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:50.581890 3714493 kapi.go:107] duration metric: took 1m10.505061287s to wait for kubernetes.io/minikube-addons=registry ...
	I0701 14:17:50.582832 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:50.593459 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:50.744140 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:51.081107 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:51.093390 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:51.243966 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:51.582414 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:51.596552 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:51.746017 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:51.820171 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:52.081575 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:52.093544 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:52.249040 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:52.582441 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:52.594112 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:52.744184 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:53.082226 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:53.093338 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:53.245050 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:53.580913 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:53.592755 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:53.746826 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:53.820702 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:54.082705 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:54.093276 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:54.244633 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:54.583747 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:54.595896 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:54.743583 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:55.080939 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:55.097199 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:55.243998 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:55.581310 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:55.593573 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:55.744617 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:56.081311 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:56.094378 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:56.243865 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:56.324735 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:56.592198 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:56.601905 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:56.743567 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:57.080762 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:57.092818 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:57.248322 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:57.581259 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:57.608831 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:57.743883 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:58.081617 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:58.094270 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:58.244507 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:58.581300 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:58.595031 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:58.743694 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:58.819122 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:59.081311 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:59.134720 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:59.258056 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:59.580976 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:59.600737 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:59.744966 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:00.111941 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:00.115667 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:00.255042 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:00.580944 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:00.593422 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:00.744199 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:00.819705 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:01.081335 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:01.094196 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:01.244659 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:01.582206 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:01.608169 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:01.744741 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:02.080555 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:02.093951 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:02.243992 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:02.582627 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:02.596738 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:02.745300 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:03.082919 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:03.095360 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:03.244087 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:03.322640 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:03.580032 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:03.592316 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:03.743526 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:04.081594 3714493 kapi.go:107] duration metric: took 1m24.005729921s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0701 14:18:04.092100 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:04.245297 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:04.593062 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:04.743777 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:05.102037 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:05.244310 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:05.324952 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:05.593130 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:05.755127 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:06.093794 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:06.244871 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:06.593080 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:06.744220 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:07.092525 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:07.244322 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:07.592808 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:07.743942 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:07.818673 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:08.093951 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:08.243751 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:08.592755 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:08.744811 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:09.093112 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:09.245337 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:09.597137 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:09.744422 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:09.819101 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:10.093387 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:10.244745 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:10.592596 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:10.746558 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:11.092749 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:11.251166 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:11.594323 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:11.746441 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:11.824825 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:12.096102 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:12.247897 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:12.592138 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:12.744277 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:13.092556 3714493 kapi.go:107] duration metric: took 1m32.505840958s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0701 14:18:13.244077 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:13.745512 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:13.842973 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:14.244527 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:14.744960 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:15.244502 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:15.764548 3714493 kapi.go:107] duration metric: took 1m31.524566461s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0701 14:18:15.766936 3714493 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-929335 cluster.
	I0701 14:18:15.768964 3714493 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0701 14:18:15.770854 3714493 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0701 14:18:15.772686 3714493 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, inspektor-gadget, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0701 14:18:15.774607 3714493 addons.go:510] duration metric: took 1m41.755576625s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner storage-provisioner-rancher metrics-server yakd inspektor-gadget default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
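The kapi.go:96 lines above are produced by a label-selector polling loop: list the pods matching a selector, log the phase of any that are not yet Running, and retry until all of them are, then emit the "duration metric: took ..." line. Below is a minimal sketch of that pattern with client-go; it is not minikube's implementation, and the helper name, kubeconfig path, and 500ms poll interval are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsByLabel polls until every pod matching selector reports
// Phase == Running, printing the observed phase each round, like the
// "waiting for pod ..., current state: Pending" lines in the log above.
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("timed out waiting for %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	if err := waitForPodsByLabel(context.Background(), cs, "ingress-nginx",
		"app.kubernetes.io/name=ingress-nginx", 10*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s\n", time.Since(start))
}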
	I0701 14:18:16.318256 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:18.318365 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:20.818435 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:23.319002 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:25.819058 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:28.319549 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:28.818929 3714493 pod_ready.go:92] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"True"
	I0701 14:18:28.818960 3714493 pod_ready.go:81] duration metric: took 1m17.006561887s for pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace to be "Ready" ...
	I0701 14:18:28.818972 3714493 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ssxlb" in "kube-system" namespace to be "Ready" ...
	I0701 14:18:28.824206 3714493 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ssxlb" in "kube-system" namespace has status "Ready":"True"
	I0701 14:18:28.824276 3714493 pod_ready.go:81] duration metric: took 5.29493ms for pod "nvidia-device-plugin-daemonset-ssxlb" in "kube-system" namespace to be "Ready" ...
	I0701 14:18:28.824322 3714493 pod_ready.go:38] duration metric: took 1m20.48196799s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
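The pod_ready.go:102 lines, by contrast, gate on the pod's Ready condition rather than its phase: metrics-server-c59844bb4-7ddxq was Running long before it flipped from "Ready":"False" to "Ready":"True" at 14:18:28. A short sketch of that check, assuming only standard client-go types (the sample pod in main is fabricated for illustration):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True. This is
// stricter than Phase == Running: readiness probes must also pass, which is
// why the log shows "Ready":"False" for an already-running metrics-server.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodRunning,
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		},
	}}
	fmt.Println(isPodReady(p)) // false: Running but not Ready, as in the log
}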
	I0701 14:18:28.824355 3714493 api_server.go:52] waiting for apiserver process to appear ...
	I0701 14:18:28.824385 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 14:18:28.824460 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 14:18:28.878403 3714493 cri.go:89] found id: "a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:28.878422 3714493 cri.go:89] found id: ""
	I0701 14:18:28.878430 3714493 logs.go:276] 1 containers: [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657]
	I0701 14:18:28.878485 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:28.882652 3714493 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 14:18:28.882726 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 14:18:28.921166 3714493 cri.go:89] found id: "a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:28.921189 3714493 cri.go:89] found id: ""
	I0701 14:18:28.921197 3714493 logs.go:276] 1 containers: [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025]
	I0701 14:18:28.921269 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:28.924643 3714493 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 14:18:28.924731 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 14:18:28.963683 3714493 cri.go:89] found id: "c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:28.963711 3714493 cri.go:89] found id: ""
	I0701 14:18:28.963720 3714493 logs.go:276] 1 containers: [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a]
	I0701 14:18:28.963775 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:28.967164 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 14:18:28.967252 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 14:18:29.007531 3714493 cri.go:89] found id: "f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:29.007556 3714493 cri.go:89] found id: ""
	I0701 14:18:29.007564 3714493 logs.go:276] 1 containers: [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8]
	I0701 14:18:29.007629 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:29.011292 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 14:18:29.011365 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 14:18:29.052545 3714493 cri.go:89] found id: "dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:29.052569 3714493 cri.go:89] found id: ""
	I0701 14:18:29.052577 3714493 logs.go:276] 1 containers: [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba]
	I0701 14:18:29.052635 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:29.056445 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 14:18:29.056519 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 14:18:29.095182 3714493 cri.go:89] found id: "646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:29.095204 3714493 cri.go:89] found id: ""
	I0701 14:18:29.095212 3714493 logs.go:276] 1 containers: [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393]
	I0701 14:18:29.095270 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:29.098928 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 14:18:29.099008 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 14:18:29.137707 3714493 cri.go:89] found id: "db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:29.137731 3714493 cri.go:89] found id: ""
	I0701 14:18:29.137739 3714493 logs.go:276] 1 containers: [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3]
	I0701 14:18:29.137794 3714493 ssh_runner.go:195] Run: which crictl
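Each cri.go:54 / cri.go:89 pair above runs the literal command shown ("sudo crictl ps -a --quiet --name=<component>") and parses one container ID per output line; an empty result means no container of that name exists. A small sketch of the same shell-out, using exactly the flags from the log (running it requires crictl and sudo on the node; the helper name is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainerIDs mirrors the "sudo crictl ps -a --quiet --name=<name>"
// invocations in the log: --quiet prints one container ID per line.
func findContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, name := range components {
		ids, err := findContainerIDs(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}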
	I0701 14:18:29.141319 3714493 logs.go:123] Gathering logs for kube-controller-manager [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393] ...
	I0701 14:18:29.141345 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:29.223402 3714493 logs.go:123] Gathering logs for CRI-O ...
	I0701 14:18:29.223439 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 14:18:29.318656 3714493 logs.go:123] Gathering logs for container status ...
	I0701 14:18:29.318689 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 14:18:29.380137 3714493 logs.go:123] Gathering logs for kubelet ...
	I0701 14:18:29.380172 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 14:18:29.432200 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.791186    1552 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.432422 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.791243    1552 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.432955 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.815611    1552 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.433163 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.815653    1552 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.444763 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.068671    1552 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.444988 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.445463 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.445656 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.445821 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.446005 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:29.490072 3714493 logs.go:123] Gathering logs for dmesg ...
	I0701 14:18:29.490111 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 14:18:29.509840 3714493 logs.go:123] Gathering logs for kube-apiserver [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657] ...
	I0701 14:18:29.509870 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:29.573738 3714493 logs.go:123] Gathering logs for kube-scheduler [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8] ...
	I0701 14:18:29.573775 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:29.620326 3714493 logs.go:123] Gathering logs for kube-proxy [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba] ...
	I0701 14:18:29.620359 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:29.663688 3714493 logs.go:123] Gathering logs for describe nodes ...
	I0701 14:18:29.663718 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 14:18:29.845662 3714493 logs.go:123] Gathering logs for etcd [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025] ...
	I0701 14:18:29.845698 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:29.899201 3714493 logs.go:123] Gathering logs for coredns [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a] ...
	I0701 14:18:29.899387 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:29.946145 3714493 logs.go:123] Gathering logs for kindnet [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3] ...
	I0701 14:18:29.946174 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:29.999964 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:29.999988 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0701 14:18:30.000037 3714493 out.go:239] X Problems detected in kubelet:
	W0701 14:18:30.000046 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:30.000053 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:30.000059 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:30.000067 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:30.000073 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:30.000086 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:30.000091 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
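The logs.go:138 warnings collected into the "Problems detected in kubelet" block above come from scanning the kubelet journal (the "sudo journalctl -u kubelet -n 400" command shown earlier) for known problem patterns. A rough sketch of that scan follows; the substring heuristic below is an assumption for illustration, not minikube's actual pattern list.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull the last 400 kubelet journal lines, exactly as the log shows.
	out, err := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		// Assumed heuristic: flag RBAC "forbidden" reflector errors like the
		// kube-root-ca.crt / coredns / gcp-auth entries reported above.
		if strings.Contains(line, "reflector.go") && strings.Contains(line, "forbidden") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}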
	I0701 14:18:40.003404 3714493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 14:18:40.023370 3714493 api_server.go:72] duration metric: took 2m6.004737105s to wait for apiserver process to appear ...
	I0701 14:18:40.023399 3714493 api_server.go:88] waiting for apiserver healthz status ...
	I0701 14:18:40.023437 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 14:18:40.023501 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 14:18:40.071114 3714493 cri.go:89] found id: "a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:40.071141 3714493 cri.go:89] found id: ""
	I0701 14:18:40.071149 3714493 logs.go:276] 1 containers: [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657]
	I0701 14:18:40.071207 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.074805 3714493 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 14:18:40.074884 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 14:18:40.118691 3714493 cri.go:89] found id: "a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:40.118714 3714493 cri.go:89] found id: ""
	I0701 14:18:40.118722 3714493 logs.go:276] 1 containers: [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025]
	I0701 14:18:40.118778 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.123744 3714493 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 14:18:40.123820 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 14:18:40.166954 3714493 cri.go:89] found id: "c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:40.166979 3714493 cri.go:89] found id: ""
	I0701 14:18:40.166987 3714493 logs.go:276] 1 containers: [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a]
	I0701 14:18:40.167047 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.171043 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 14:18:40.171114 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 14:18:40.215720 3714493 cri.go:89] found id: "f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:40.215744 3714493 cri.go:89] found id: ""
	I0701 14:18:40.215752 3714493 logs.go:276] 1 containers: [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8]
	I0701 14:18:40.215812 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.219836 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 14:18:40.219910 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 14:18:40.261304 3714493 cri.go:89] found id: "dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:40.261327 3714493 cri.go:89] found id: ""
	I0701 14:18:40.261335 3714493 logs.go:276] 1 containers: [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba]
	I0701 14:18:40.261392 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.265186 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 14:18:40.265259 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 14:18:40.307433 3714493 cri.go:89] found id: "646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:40.307457 3714493 cri.go:89] found id: ""
	I0701 14:18:40.307479 3714493 logs.go:276] 1 containers: [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393]
	I0701 14:18:40.307536 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.310987 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 14:18:40.311060 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 14:18:40.349166 3714493 cri.go:89] found id: "db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:40.349191 3714493 cri.go:89] found id: ""
	I0701 14:18:40.349198 3714493 logs.go:276] 1 containers: [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3]
	I0701 14:18:40.349255 3714493 ssh_runner.go:195] Run: which crictl
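
The repeated pairs above are a discovery pass: each `sudo crictl ps -a --quiet --name=<component>` call prints the matching container IDs, one per line, and those IDs are what the "Gathering logs" steps below feed to `crictl logs`. A minimal Go sketch of that step (hypothetical helper, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors the discovery step in the log: with --quiet,
	// crictl prints only container IDs, one per line, so splitting on
	// whitespace yields the ID list.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := containerIDs("kube-apiserver")
		fmt.Println(ids, err)
	}
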
	I0701 14:18:40.352947 3714493 logs.go:123] Gathering logs for kubelet ...
	I0701 14:18:40.352972 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 14:18:40.397772 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.791186    1552 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.398011 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.791243    1552 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.398700 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.815611    1552 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.398925 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.815653    1552 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.410767 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.068671    1552 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.411019 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.411501 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.411692 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.411858 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.412052 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:40.460267 3714493 logs.go:123] Gathering logs for describe nodes ...
	I0701 14:18:40.460318 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 14:18:40.627707 3714493 logs.go:123] Gathering logs for kube-scheduler [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8] ...
	I0701 14:18:40.627740 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:40.682164 3714493 logs.go:123] Gathering logs for kube-controller-manager [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393] ...
	I0701 14:18:40.682195 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:40.772273 3714493 logs.go:123] Gathering logs for container status ...
	I0701 14:18:40.772304 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 14:18:40.842797 3714493 logs.go:123] Gathering logs for kindnet [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3] ...
	I0701 14:18:40.842828 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:40.886545 3714493 logs.go:123] Gathering logs for CRI-O ...
	I0701 14:18:40.886582 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 14:18:40.995636 3714493 logs.go:123] Gathering logs for dmesg ...
	I0701 14:18:40.995681 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 14:18:41.016425 3714493 logs.go:123] Gathering logs for kube-apiserver [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657] ...
	I0701 14:18:41.016462 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:41.071208 3714493 logs.go:123] Gathering logs for etcd [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025] ...
	I0701 14:18:41.071238 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:41.122978 3714493 logs.go:123] Gathering logs for coredns [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a] ...
	I0701 14:18:41.123008 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:41.164189 3714493 logs.go:123] Gathering logs for kube-proxy [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba] ...
	I0701 14:18:41.164223 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:41.207147 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:41.207170 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0701 14:18:41.207218 3714493 out.go:239] X Problems detected in kubelet:
	W0701 14:18:41.207233 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:41.207240 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:41.207256 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:41.207264 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:41.207277 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:41.207283 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:41.207289 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:18:51.208757 3714493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:18:51.216274 3714493 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0701 14:18:51.217497 3714493 api_server.go:141] control plane version: v1.30.2
	I0701 14:18:51.217526 3714493 api_server.go:131] duration metric: took 11.194120422s to wait for apiserver health ...
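
The healthz wait recorded above is a plain poll: hit https://192.168.49.2:8443/healthz until it answers 200/ok or a deadline expires. A minimal Go sketch of such a loop, assuming the apiserver's self-signed certificate is skipped for brevity (hypothetical code, not minikube's implementation, which would pin the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping TLS verification keeps the sketch short; a real
			// caller would trust the cluster CA instead.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "returned 200: ok" in the log above
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
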
	I0701 14:18:51.217535 3714493 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 14:18:51.217558 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 14:18:51.217627 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 14:18:51.262838 3714493 cri.go:89] found id: "a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:51.262868 3714493 cri.go:89] found id: ""
	I0701 14:18:51.262876 3714493 logs.go:276] 1 containers: [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657]
	I0701 14:18:51.262934 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.266571 3714493 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 14:18:51.266649 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 14:18:51.313334 3714493 cri.go:89] found id: "a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:51.313358 3714493 cri.go:89] found id: ""
	I0701 14:18:51.313366 3714493 logs.go:276] 1 containers: [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025]
	I0701 14:18:51.313421 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.317854 3714493 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 14:18:51.317927 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 14:18:51.359342 3714493 cri.go:89] found id: "c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:51.359362 3714493 cri.go:89] found id: ""
	I0701 14:18:51.359370 3714493 logs.go:276] 1 containers: [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a]
	I0701 14:18:51.359425 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.363207 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 14:18:51.363282 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 14:18:51.407610 3714493 cri.go:89] found id: "f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:51.407633 3714493 cri.go:89] found id: ""
	I0701 14:18:51.407640 3714493 logs.go:276] 1 containers: [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8]
	I0701 14:18:51.407722 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.411297 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 14:18:51.411396 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 14:18:51.453297 3714493 cri.go:89] found id: "dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:51.453364 3714493 cri.go:89] found id: ""
	I0701 14:18:51.453386 3714493 logs.go:276] 1 containers: [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba]
	I0701 14:18:51.453476 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.457279 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 14:18:51.457393 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 14:18:51.505841 3714493 cri.go:89] found id: "646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:51.505864 3714493 cri.go:89] found id: ""
	I0701 14:18:51.505872 3714493 logs.go:276] 1 containers: [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393]
	I0701 14:18:51.505944 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.509555 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 14:18:51.509641 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 14:18:51.553689 3714493 cri.go:89] found id: "db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:51.553753 3714493 cri.go:89] found id: ""
	I0701 14:18:51.553775 3714493 logs.go:276] 1 containers: [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3]
	I0701 14:18:51.553861 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.557461 3714493 logs.go:123] Gathering logs for kubelet ...
	I0701 14:18:51.557545 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 14:18:51.598201 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.791186    1552 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.598453 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.791243    1552 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.599003 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.815611    1552 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.599206 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.815653    1552 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.610045 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.068671    1552 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.610253 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.610715 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.610908 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.611074 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.611264 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:51.656841 3714493 logs.go:123] Gathering logs for dmesg ...
	I0701 14:18:51.656865 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 14:18:51.676175 3714493 logs.go:123] Gathering logs for etcd [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025] ...
	I0701 14:18:51.676204 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:51.733448 3714493 logs.go:123] Gathering logs for kube-scheduler [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8] ...
	I0701 14:18:51.733480 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:51.780486 3714493 logs.go:123] Gathering logs for kube-controller-manager [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393] ...
	I0701 14:18:51.780516 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:51.852657 3714493 logs.go:123] Gathering logs for CRI-O ...
	I0701 14:18:51.852773 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 14:18:51.943875 3714493 logs.go:123] Gathering logs for describe nodes ...
	I0701 14:18:51.943911 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 14:18:52.078224 3714493 logs.go:123] Gathering logs for kube-apiserver [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657] ...
	I0701 14:18:52.078254 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:52.141339 3714493 logs.go:123] Gathering logs for coredns [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a] ...
	I0701 14:18:52.141370 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:52.180565 3714493 logs.go:123] Gathering logs for kube-proxy [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba] ...
	I0701 14:18:52.180595 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:52.242104 3714493 logs.go:123] Gathering logs for kindnet [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3] ...
	I0701 14:18:52.242136 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:52.280088 3714493 logs.go:123] Gathering logs for container status ...
	I0701 14:18:52.280116 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 14:18:52.341462 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:52.341490 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0701 14:18:52.341544 3714493 out.go:239] X Problems detected in kubelet:
	W0701 14:18:52.341553 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:52.341562 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:52.341575 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:52.341583 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:52.341591 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:52.341597 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:52.341608 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:19:02.361276 3714493 system_pods.go:59] 18 kube-system pods found
	I0701 14:19:02.361314 3714493 system_pods.go:61] "coredns-7db6d8ff4d-s8jw9" [7ec40280-c5a3-4403-8f98-39eaa3f29e2c] Running
	I0701 14:19:02.361327 3714493 system_pods.go:61] "csi-hostpath-attacher-0" [2c927ce0-3ecd-4174-94fe-3e73008a24eb] Running
	I0701 14:19:02.361333 3714493 system_pods.go:61] "csi-hostpath-resizer-0" [24cb89c7-79cc-4c66-8046-84cf4c819fd4] Running
	I0701 14:19:02.361338 3714493 system_pods.go:61] "csi-hostpathplugin-mcv65" [4ad794ec-8d44-48b4-94fd-ab0605d8f2b1] Running
	I0701 14:19:02.361342 3714493 system_pods.go:61] "etcd-addons-929335" [0664c0af-c270-4fcb-8bb4-cc76248cf3ea] Running
	I0701 14:19:02.361346 3714493 system_pods.go:61] "kindnet-nzscv" [9aec9a7c-149e-4bec-b5c3-0524417a5272] Running
	I0701 14:19:02.361351 3714493 system_pods.go:61] "kube-apiserver-addons-929335" [af03cc27-972b-4106-b44b-de7d69eab5a6] Running
	I0701 14:19:02.361359 3714493 system_pods.go:61] "kube-controller-manager-addons-929335" [efe0db8d-b63f-4b20-b0b5-6e1036c91627] Running
	I0701 14:19:02.361381 3714493 system_pods.go:61] "kube-ingress-dns-minikube" [25af24ab-7674-4c32-b452-00053e068d4c] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0701 14:19:02.361399 3714493 system_pods.go:61] "kube-proxy-b7sh5" [0ae5c8da-8e2e-4513-9c4a-058705a64586] Running
	I0701 14:19:02.361405 3714493 system_pods.go:61] "kube-scheduler-addons-929335" [12031591-d005-45c2-8097-53a52d94b85b] Running
	I0701 14:19:02.361409 3714493 system_pods.go:61] "metrics-server-c59844bb4-7ddxq" [d044ed9e-3f07-4293-b20a-7710385bba17] Running
	I0701 14:19:02.361417 3714493 system_pods.go:61] "nvidia-device-plugin-daemonset-ssxlb" [07a73834-f2a1-49e5-ae9a-e15bee08c8ab] Running
	I0701 14:19:02.361425 3714493 system_pods.go:61] "registry-bnzqk" [710fb3bb-d2cb-4fb1-a706-25569704842a] Running
	I0701 14:19:02.361428 3714493 system_pods.go:61] "registry-proxy-cwtgh" [d522d504-68de-46ed-a686-4cb3f3054752] Running
	I0701 14:19:02.361432 3714493 system_pods.go:61] "snapshot-controller-745499f584-44clr" [46fcf348-5b93-443d-ad6c-9460a5abac66] Running
	I0701 14:19:02.361436 3714493 system_pods.go:61] "snapshot-controller-745499f584-f9c4l" [5d61a185-32fe-4b26-adf3-25413d4c354d] Running
	I0701 14:19:02.361444 3714493 system_pods.go:61] "storage-provisioner" [336d511c-48f8-41ab-9e80-73414eb12f55] Running
	I0701 14:19:02.361450 3714493 system_pods.go:74] duration metric: took 11.143908631s to wait for pod list to return data ...
	I0701 14:19:02.361465 3714493 default_sa.go:34] waiting for default service account to be created ...
	I0701 14:19:02.372349 3714493 default_sa.go:45] found service account: "default"
	I0701 14:19:02.372379 3714493 default_sa.go:55] duration metric: took 10.901289ms for default service account to be created ...
	I0701 14:19:02.372390 3714493 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 14:19:02.382702 3714493 system_pods.go:86] 18 kube-system pods found
	I0701 14:19:02.382740 3714493 system_pods.go:89] "coredns-7db6d8ff4d-s8jw9" [7ec40280-c5a3-4403-8f98-39eaa3f29e2c] Running
	I0701 14:19:02.382748 3714493 system_pods.go:89] "csi-hostpath-attacher-0" [2c927ce0-3ecd-4174-94fe-3e73008a24eb] Running
	I0701 14:19:02.382753 3714493 system_pods.go:89] "csi-hostpath-resizer-0" [24cb89c7-79cc-4c66-8046-84cf4c819fd4] Running
	I0701 14:19:02.382758 3714493 system_pods.go:89] "csi-hostpathplugin-mcv65" [4ad794ec-8d44-48b4-94fd-ab0605d8f2b1] Running
	I0701 14:19:02.382762 3714493 system_pods.go:89] "etcd-addons-929335" [0664c0af-c270-4fcb-8bb4-cc76248cf3ea] Running
	I0701 14:19:02.382767 3714493 system_pods.go:89] "kindnet-nzscv" [9aec9a7c-149e-4bec-b5c3-0524417a5272] Running
	I0701 14:19:02.382771 3714493 system_pods.go:89] "kube-apiserver-addons-929335" [af03cc27-972b-4106-b44b-de7d69eab5a6] Running
	I0701 14:19:02.382775 3714493 system_pods.go:89] "kube-controller-manager-addons-929335" [efe0db8d-b63f-4b20-b0b5-6e1036c91627] Running
	I0701 14:19:02.382785 3714493 system_pods.go:89] "kube-ingress-dns-minikube" [25af24ab-7674-4c32-b452-00053e068d4c] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0701 14:19:02.382791 3714493 system_pods.go:89] "kube-proxy-b7sh5" [0ae5c8da-8e2e-4513-9c4a-058705a64586] Running
	I0701 14:19:02.382799 3714493 system_pods.go:89] "kube-scheduler-addons-929335" [12031591-d005-45c2-8097-53a52d94b85b] Running
	I0701 14:19:02.382803 3714493 system_pods.go:89] "metrics-server-c59844bb4-7ddxq" [d044ed9e-3f07-4293-b20a-7710385bba17] Running
	I0701 14:19:02.382807 3714493 system_pods.go:89] "nvidia-device-plugin-daemonset-ssxlb" [07a73834-f2a1-49e5-ae9a-e15bee08c8ab] Running
	I0701 14:19:02.382811 3714493 system_pods.go:89] "registry-bnzqk" [710fb3bb-d2cb-4fb1-a706-25569704842a] Running
	I0701 14:19:02.382817 3714493 system_pods.go:89] "registry-proxy-cwtgh" [d522d504-68de-46ed-a686-4cb3f3054752] Running
	I0701 14:19:02.382822 3714493 system_pods.go:89] "snapshot-controller-745499f584-44clr" [46fcf348-5b93-443d-ad6c-9460a5abac66] Running
	I0701 14:19:02.382829 3714493 system_pods.go:89] "snapshot-controller-745499f584-f9c4l" [5d61a185-32fe-4b26-adf3-25413d4c354d] Running
	I0701 14:19:02.382833 3714493 system_pods.go:89] "storage-provisioner" [336d511c-48f8-41ab-9e80-73414eb12f55] Running
	I0701 14:19:02.382840 3714493 system_pods.go:126] duration metric: took 10.445374ms to wait for k8s-apps to be running ...
	I0701 14:19:02.382853 3714493 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 14:19:02.382917 3714493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 14:19:02.395275 3714493 system_svc.go:56] duration metric: took 12.412197ms WaitForService to wait for kubelet
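
`systemctl is-active --quiet <unit>` communicates only through its exit status (0 means active), which is why the check above records no output. A tiny Go sketch of the same probe (hypothetical helper name):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// isActive returns true when the systemd unit reports active:
	// with --quiet, systemctl's exit status alone carries the answer.
	func isActive(unit string) bool {
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", isActive("kubelet"))
	}
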
	I0701 14:19:02.395315 3714493 kubeadm.go:576] duration metric: took 2m28.376687186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 14:19:02.395336 3714493 node_conditions.go:102] verifying NodePressure condition ...
	I0701 14:19:02.399242 3714493 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0701 14:19:02.399276 3714493 node_conditions.go:123] node cpu capacity is 2
	I0701 14:19:02.399289 3714493 node_conditions.go:105] duration metric: took 3.948028ms to run NodePressure ...
	I0701 14:19:02.399302 3714493 start.go:240] waiting for startup goroutines ...
	I0701 14:19:02.399310 3714493 start.go:245] waiting for cluster config update ...
	I0701 14:19:02.399326 3714493 start.go:254] writing updated cluster config ...
	I0701 14:19:02.399630 3714493 ssh_runner.go:195] Run: rm -f paused
	I0701 14:19:02.737108 3714493 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0701 14:19:02.739247 3714493 out.go:177] * Done! kubectl is now configured to use "addons-929335" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 01 14:23:40 addons-929335 crio[962]: time="2024-07-01 14:23:40.695314266Z" level=info msg="Creating container: default/hello-world-app-86c47465fc-t4bld/hello-world-app" id=6c96907b-5d9a-4ab1-88db-dc2b2d061047 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 01 14:23:40 addons-929335 crio[962]: time="2024-07-01 14:23:40.695409865Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 01 14:23:40 addons-929335 crio[962]: time="2024-07-01 14:23:40.749534940Z" level=info msg="Removing container: c012c866d3d0c59042b2cc7465b61255f4f049c6aada67c2221dfed9736cf044" id=9dc85750-5c66-4c65-8c05-52cecf15137b name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 01 14:23:40 addons-929335 crio[962]: time="2024-07-01 14:23:40.772978570Z" level=info msg="Removed container c012c866d3d0c59042b2cc7465b61255f4f049c6aada67c2221dfed9736cf044: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=9dc85750-5c66-4c65-8c05-52cecf15137b name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 01 14:23:40 addons-929335 crio[962]: time="2024-07-01 14:23:40.792734085Z" level=info msg="Created container 80ba2de1a9989a8fab4559d13996447401349e0643fa3473c47c5c8409e8fba5: default/hello-world-app-86c47465fc-t4bld/hello-world-app" id=6c96907b-5d9a-4ab1-88db-dc2b2d061047 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 01 14:23:40 addons-929335 crio[962]: time="2024-07-01 14:23:40.793594383Z" level=info msg="Starting container: 80ba2de1a9989a8fab4559d13996447401349e0643fa3473c47c5c8409e8fba5" id=fd463532-281e-4523-a2db-3864f58936fe name=/runtime.v1.RuntimeService/StartContainer
	Jul 01 14:23:40 addons-929335 crio[962]: time="2024-07-01 14:23:40.806541698Z" level=info msg="Started container" PID=8756 containerID=80ba2de1a9989a8fab4559d13996447401349e0643fa3473c47c5c8409e8fba5 description=default/hello-world-app-86c47465fc-t4bld/hello-world-app id=fd463532-281e-4523-a2db-3864f58936fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=c530951965dee914ec95026501306491d1537de7e0431f1fc696b64fc054906d
	Jul 01 14:23:40 addons-929335 conmon[8745]: conmon 80ba2de1a9989a8fab45 <ninfo>: container 8756 exited with status 1
	Jul 01 14:23:41 addons-929335 crio[962]: time="2024-07-01 14:23:41.754067019Z" level=info msg="Removing container: 1b3807076b581094994edfdb325b80931912c7fa3a039855597c48b7b582ab34" id=08632c95-4262-4477-9bd7-58843f0afa6e name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 01 14:23:41 addons-929335 crio[962]: time="2024-07-01 14:23:41.778645329Z" level=info msg="Removed container 1b3807076b581094994edfdb325b80931912c7fa3a039855597c48b7b582ab34: default/hello-world-app-86c47465fc-t4bld/hello-world-app" id=08632c95-4262-4477-9bd7-58843f0afa6e name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 01 14:23:42 addons-929335 crio[962]: time="2024-07-01 14:23:42.451750746Z" level=info msg="Stopping container: 607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116 (timeout: 2s)" id=d1564689-191c-4837-9c10-c6827036133a name=/runtime.v1.RuntimeService/StopContainer
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.457348420Z" level=warning msg="Stopping container 607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=d1564689-191c-4837-9c10-c6827036133a name=/runtime.v1.RuntimeService/StopContainer
	Jul 01 14:23:44 addons-929335 conmon[4929]: conmon 607a7ee7972a3e2b7563 <ninfo>: container 4940 exited with status 137
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.591876574Z" level=info msg="Stopped container 607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116: ingress-nginx/ingress-nginx-controller-768f948f8f-c7846/controller" id=d1564689-191c-4837-9c10-c6827036133a name=/runtime.v1.RuntimeService/StopContainer
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.592384550Z" level=info msg="Stopping pod sandbox: 4ea9091ce2577ae3a3b51accf2c59b1ff1249cf54b3a141e2f1e9462c56c9b90" id=94959906-89e5-49cf-8b54-7d909072df89 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.595762680Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-AUWIQPTSEN2TX7I6 - [0:0]\n:KUBE-HP-AAYANBQW3NESDGN2 - [0:0]\n-X KUBE-HP-AAYANBQW3NESDGN2\n-X KUBE-HP-AUWIQPTSEN2TX7I6\nCOMMIT\n"
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.610403639Z" level=info msg="Closing host port tcp:80"
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.610455225Z" level=info msg="Closing host port tcp:443"
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.611855057Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.611884686Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.612057053Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-768f948f8f-c7846 Namespace:ingress-nginx ID:4ea9091ce2577ae3a3b51accf2c59b1ff1249cf54b3a141e2f1e9462c56c9b90 UID:f7aef7c5-db2f-4e8d-a121-42eaf3f67850 NetNS:/var/run/netns/214eb587-675b-4691-99c2-b9d59201645e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.612197444Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-768f948f8f-c7846 from CNI network \"kindnet\" (type=ptp)"
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.634823302Z" level=info msg="Stopped pod sandbox: 4ea9091ce2577ae3a3b51accf2c59b1ff1249cf54b3a141e2f1e9462c56c9b90" id=94959906-89e5-49cf-8b54-7d909072df89 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.762328821Z" level=info msg="Removing container: 607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116" id=81418592-6629-4ec0-b693-5d24ee06857a name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 01 14:23:44 addons-929335 crio[962]: time="2024-07-01 14:23:44.779026865Z" level=info msg="Removed container 607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116: ingress-nginx/ingress-nginx-controller-768f948f8f-c7846/controller" id=81418592-6629-4ec0-b693-5d24ee06857a name=/runtime.v1.RuntimeService/RemoveContainer
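
The stop sequence above is the usual two-phase shutdown: CRI-O sends the stop signal, waits out the 2-second grace period, then falls back to SIGKILL, and conmon reports exit status 137 = 128 + 9 (SIGKILL). The earlier hello-world-app exit status 1 is, by contrast, an ordinary application failure. A small Go sketch of that shell-style decoding convention:

	package main

	import "fmt"

	// decodeExit interprets a shell-style exit status: values above 128
	// conventionally mean "killed by signal (status - 128)".
	func decodeExit(status int) string {
		if status > 128 {
			return fmt.Sprintf("terminated by signal %d", status-128)
		}
		return fmt.Sprintf("exited with code %d", status)
	}

	func main() {
		fmt.Println(decodeExit(137)) // SIGKILL (9), as for container 4940 above
		fmt.Println(decodeExit(1))   // ordinary failure, as for container 8756
	}
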
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	80ba2de1a9989       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             8 seconds ago       Exited              hello-world-app           2                   c530951965dee       hello-world-app-86c47465fc-t4bld
	8c9113c9643e8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734            50 seconds ago      Exited              gadget                    6                   ceb3c146cda59       gadget-8j46m
	816a344cbbf06       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   4b4737705ddc8       nginx
	1726107bb7a18       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        4 minutes ago       Running             headlamp                  0                   0beb7a74486ef       headlamp-7867546754-jbhwq
	bcb9f3a8b177e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 5 minutes ago       Running             gcp-auth                  0                   937faf0acb5f8       gcp-auth-5db96cd9b4-zzdzf
	f4f2268f451b3       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              6 minutes ago       Running             yakd                      0                   ad1a1e08aa277       yakd-dashboard-799879c74f-k9fkr
	9d1efded80093       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   6 minutes ago       Exited              patch                     0                   0bef94b2fe69f       ingress-nginx-admission-patch-jmhdb
	e0fb5c63dd33a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   6 minutes ago       Exited              create                    0                   40813e8717ef5       ingress-nginx-admission-create-wgq5c
	e9ea99849b7a1       registry.k8s.io/metrics-server/metrics-server@sha256:7f0fc3565b6d4655d078bb8e250d0423d7c79aeb05fbc71e1ffa6ff664264d70        6 minutes ago       Running             metrics-server            0                   316c63aa6830a       metrics-server-c59844bb4-7ddxq
	c7a57f061ff4a       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                                             6 minutes ago       Running             coredns                   0                   e8dd3c5d29672       coredns-7db6d8ff4d-s8jw9
	cd615c19161cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago       Running             storage-provisioner       0                   fc87ffa26b343       storage-provisioner
	dafa28039c484       66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae                                                             7 minutes ago       Running             kube-proxy                0                   1892d9010797f       kube-proxy-b7sh5
	db206e1b79fd3       89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40                                                             7 minutes ago       Running             kindnet-cni               0                   0634e6cdeec1f       kindnet-nzscv
	a8156a2a69e7a       84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0                                                             7 minutes ago       Running             kube-apiserver            0                   b6e2dbefac823       kube-apiserver-addons-929335
	a5290b2c5513d       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                                             7 minutes ago       Running             etcd                      0                   77244071fc809       etcd-addons-929335
	646ad903c2a53       e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567                                                             7 minutes ago       Running             kube-controller-manager   0                   f8a7cc5b1d3cf       kube-controller-manager-addons-929335
	f433fbd81a7c4       c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5                                                             7 minutes ago       Running             kube-scheduler            0                   751716a05cad0       kube-scheduler-addons-929335
	
	
	==> coredns [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a] <==
	[INFO] 10.244.0.19:40367 - 9202 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000372369s
	[INFO] 10.244.0.19:60297 - 58306 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002308197s
	[INFO] 10.244.0.19:40367 - 22993 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002149033s
	[INFO] 10.244.0.19:40367 - 14452 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002168922s
	[INFO] 10.244.0.19:60297 - 47707 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002573265s
	[INFO] 10.244.0.19:40367 - 63736 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000116571s
	[INFO] 10.244.0.19:60297 - 39186 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059915s
	[INFO] 10.244.0.19:47259 - 19264 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101367s
	[INFO] 10.244.0.19:56453 - 43442 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054958s
	[INFO] 10.244.0.19:56453 - 4050 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006186s
	[INFO] 10.244.0.19:47259 - 44322 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066938s
	[INFO] 10.244.0.19:56453 - 55262 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055795s
	[INFO] 10.244.0.19:47259 - 53949 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051873s
	[INFO] 10.244.0.19:56453 - 59506 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006478s
	[INFO] 10.244.0.19:47259 - 52115 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047754s
	[INFO] 10.244.0.19:56453 - 14654 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050274s
	[INFO] 10.244.0.19:56453 - 61532 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006062s
	[INFO] 10.244.0.19:47259 - 18474 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000080632s
	[INFO] 10.244.0.19:47259 - 19652 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000075119s
	[INFO] 10.244.0.19:56453 - 25691 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001443443s
	[INFO] 10.244.0.19:47259 - 43778 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0010961s
	[INFO] 10.244.0.19:56453 - 43564 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001001527s
	[INFO] 10.244.0.19:56453 - 19462 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067693s
	[INFO] 10.244.0.19:47259 - 46001 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001038565s
	[INFO] 10.244.0.19:47259 - 13415 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054917s
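
Every NXDOMAIN/NOERROR run above is resolv.conf search-list expansion at work: with the Kubernetes pod default ndots:5, a name with fewer than five dots is tried against each search suffix before being tried verbatim, so hello-world-app.default.svc.cluster.local (four dots) first cycles through the querying pod's search domains. A short Go sketch of the expansion, with the search list inferred from the queries above:

	package main

	import (
		"fmt"
		"strings"
	)

	// expand mimics resolv.conf search-list behavior: if name has fewer
	// than ndots dots, each search suffix is tried before the bare name.
	func expand(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name)
	}

	func main() {
		// Suffixes inferred from the coredns queries logged above.
		search := []string{
			"ingress-nginx.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"us-east-2.compute.internal",
		}
		for _, q := range expand("hello-world-app.default.svc.cluster.local", search, 5) {
			fmt.Println(q)
		}
	}
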
	
	
	==> describe nodes <==
	Name:               addons-929335
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-929335
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=addons-929335
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_01T14_16_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-929335
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 14:16:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-929335
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 14:23:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 14:21:27 +0000   Mon, 01 Jul 2024 14:16:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 14:21:27 +0000   Mon, 01 Jul 2024 14:16:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 14:21:27 +0000   Mon, 01 Jul 2024 14:16:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 14:21:27 +0000   Mon, 01 Jul 2024 14:17:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-929335
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9d0ede6bb1d4bb381c7b3fce060be76
	  System UUID:                fcb5e7bf-e654-480c-840b-846ff4889ec5
	  Boot ID:                    030faa4f-44aa-434e-978f-182f6d212f48
	  Kernel Version:             5.15.0-1063-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-t4bld         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  gadget                      gadget-8j46m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  gcp-auth                    gcp-auth-5db96cd9b4-zzdzf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  headlamp                    headlamp-7867546754-jbhwq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 coredns-7db6d8ff4d-s8jw9                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m16s
	  kube-system                 etcd-addons-929335                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m30s
	  kube-system                 kindnet-nzscv                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m16s
	  kube-system                 kube-apiserver-addons-929335             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-controller-manager-addons-929335    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 kube-proxy-b7sh5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 kube-scheduler-addons-929335             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m30s
	  kube-system                 metrics-server-c59844bb4-7ddxq           100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         7m11s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  yakd-dashboard              yakd-dashboard-799879c74f-k9fkr          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     7m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m10s  kube-proxy       
	  Normal  Starting                 7m30s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m30s  kubelet          Node addons-929335 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m30s  kubelet          Node addons-929335 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m30s  kubelet          Node addons-929335 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m16s  node-controller  Node addons-929335 event: Registered Node addons-929335 in Controller
	  Normal  NodeReady                6m42s  kubelet          Node addons-929335 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.001028] FS-Cache: O-key=[8] '8b8e3b0000000000'
	[  +0.000694] FS-Cache: N-cookie c=000001e0 [p=000001d7 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=00000000045eedfb
	[  +0.001018] FS-Cache: N-key=[8] '8b8e3b0000000000'
	[  +0.014530] FS-Cache: Duplicate cookie detected
	[  +0.000695] FS-Cache: O-cookie c=000001da [p=000001d7 fl=226 nc=0 na=1]
	[  +0.000939] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=000000001f9e9a8e
	[  +0.001023] FS-Cache: O-key=[8] '8b8e3b0000000000'
	[  +0.000689] FS-Cache: N-cookie c=000001e1 [p=000001d7 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=00000000c4ff6e50
	[  +0.001026] FS-Cache: N-key=[8] '8b8e3b0000000000'
	[  +2.755378] FS-Cache: Duplicate cookie detected
	[  +0.000724] FS-Cache: O-cookie c=000001d8 [p=000001d7 fl=226 nc=0 na=1]
	[  +0.000958] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=00000000dd0e7f7e
	[  +0.001033] FS-Cache: O-key=[8] '8a8e3b0000000000'
	[  +0.000734] FS-Cache: N-cookie c=000001e3 [p=000001d7 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=00000000045eedfb
	[  +0.001022] FS-Cache: N-key=[8] '8a8e3b0000000000'
	[  +0.295007] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=000001dd [p=000001d7 fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=000000002ac53bcf
	[  +0.001042] FS-Cache: O-key=[8] '908e3b0000000000'
	[  +0.000722] FS-Cache: N-cookie c=000001e4 [p=000001d7 fl=2 nc=0 na=1]
	[  +0.000933] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=00000000dca0f41c
	[  +0.001038] FS-Cache: N-key=[8] '908e3b0000000000'
	
	
	==> etcd [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025] <==
	{"level":"info","ts":"2024-07-01T14:16:13.257178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-01T14:16:13.257188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-01T14:16:13.257196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-01T14:16:13.26124Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-929335 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-01T14:16:13.265123Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T14:16:13.265282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-01T14:16:13.265578Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-01T14:16:13.267232Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-01T14:16:13.270535Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-01T14:16:13.281377Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-01T14:16:13.281949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-01T14:16:13.281444Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T14:16:13.324782Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T14:16:13.324898Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T14:16:36.435212Z","caller":"traceutil/trace.go:171","msg":"trace[1642189225] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"108.969832ms","start":"2024-07-01T14:16:36.32622Z","end":"2024-07-01T14:16:36.43519Z","steps":["trace[1642189225] 'process raft request'  (duration: 11.836684ms)","trace[1642189225] 'compare'  (duration: 81.260435ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-01T14:16:36.538281Z","caller":"traceutil/trace.go:171","msg":"trace[1453600289] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"121.328087ms","start":"2024-07-01T14:16:36.416937Z","end":"2024-07-01T14:16:36.538265Z","steps":["trace[1453600289] 'process raft request'  (duration: 121.206797ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-01T14:16:37.229631Z","caller":"traceutil/trace.go:171","msg":"trace[437676447] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"134.004714ms","start":"2024-07-01T14:16:37.095608Z","end":"2024-07-01T14:16:37.229613Z","steps":["trace[437676447] 'process raft request'  (duration: 97.681552ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:16:37.232918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.182642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-01T14:16:37.233096Z","caller":"traceutil/trace.go:171","msg":"trace[1873488679] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:371; }","duration":"137.363239ms","start":"2024-07-01T14:16:37.095716Z","end":"2024-07-01T14:16:37.233079Z","steps":["trace[1873488679] 'agreement among raft nodes before linearized reading'  (duration: 137.152709ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:16:37.233547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.481765ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-929335\" ","response":"range_response_count:1 size:5744"}
	{"level":"info","ts":"2024-07-01T14:16:37.239983Z","caller":"traceutil/trace.go:171","msg":"trace[562760670] range","detail":"{range_begin:/registry/minions/addons-929335; range_end:; response_count:1; response_revision:371; }","duration":"137.931072ms","start":"2024-07-01T14:16:37.095688Z","end":"2024-07-01T14:16:37.233619Z","steps":["trace[562760670] 'agreement among raft nodes before linearized reading'  (duration: 97.703821ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-01T14:16:37.633465Z","caller":"traceutil/trace.go:171","msg":"trace[1799736065] linearizableReadLoop","detail":"{readStateIndex:385; appliedIndex:384; }","duration":"103.3815ms","start":"2024-07-01T14:16:37.530067Z","end":"2024-07-01T14:16:37.633448Z","steps":["trace[1799736065] 'read index received'  (duration: 42.997053ms)","trace[1799736065] 'applied index is now lower than readState.Index'  (duration: 60.383503ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-01T14:16:37.633621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.536979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-07-01T14:16:37.633644Z","caller":"traceutil/trace.go:171","msg":"trace[393152311] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:374; }","duration":"103.576086ms","start":"2024-07-01T14:16:37.530061Z","end":"2024-07-01T14:16:37.633637Z","steps":["trace[393152311] 'agreement among raft nodes before linearized reading'  (duration: 103.468106ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-01T14:16:37.633847Z","caller":"traceutil/trace.go:171","msg":"trace[1893789424] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"131.925726ms","start":"2024-07-01T14:16:37.501909Z","end":"2024-07-01T14:16:37.633835Z","steps":["trace[1893789424] 'process raft request'  (duration: 71.148168ms)","trace[1893789424] 'compare'  (duration: 60.270312ms)"],"step_count":2}
	
	
	==> gcp-auth [bcb9f3a8b177e26d49ce5f5b002574f3acd6226509cf5871f475678d5732846c] <==
	2024/07/01 14:18:14 GCP Auth Webhook started!
	2024/07/01 14:19:03 Ready to marshal response ...
	2024/07/01 14:19:03 Ready to write response ...
	2024/07/01 14:19:03 Ready to marshal response ...
	2024/07/01 14:19:03 Ready to write response ...
	2024/07/01 14:19:03 Ready to marshal response ...
	2024/07/01 14:19:03 Ready to write response ...
	2024/07/01 14:19:14 Ready to marshal response ...
	2024/07/01 14:19:14 Ready to write response ...
	2024/07/01 14:19:20 Ready to marshal response ...
	2024/07/01 14:19:20 Ready to write response ...
	2024/07/01 14:19:20 Ready to marshal response ...
	2024/07/01 14:19:20 Ready to write response ...
	2024/07/01 14:19:27 Ready to marshal response ...
	2024/07/01 14:19:27 Ready to write response ...
	2024/07/01 14:20:14 Ready to marshal response ...
	2024/07/01 14:20:14 Ready to write response ...
	2024/07/01 14:20:46 Ready to marshal response ...
	2024/07/01 14:20:46 Ready to write response ...
	2024/07/01 14:21:03 Ready to marshal response ...
	2024/07/01 14:21:03 Ready to write response ...
	2024/07/01 14:23:23 Ready to marshal response ...
	2024/07/01 14:23:23 Ready to write response ...
	
	
	==> kernel <==
	 14:23:49 up 1 day, 22:06,  0 users,  load average: 0.32, 0.92, 1.75
	Linux addons-929335 5.15.0-1063-aws #69~20.04.1-Ubuntu SMP Fri May 10 19:21:30 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3] <==
	I0701 14:21:48.013577       1 main.go:227] handling current node
	I0701 14:21:58.023509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:21:58.023539       1 main.go:227] handling current node
	I0701 14:22:08.027245       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:22:08.027271       1 main.go:227] handling current node
	I0701 14:22:18.036510       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:22:18.036539       1 main.go:227] handling current node
	I0701 14:22:28.049001       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:22:28.049165       1 main.go:227] handling current node
	I0701 14:22:38.053391       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:22:38.053418       1 main.go:227] handling current node
	I0701 14:22:48.063560       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:22:48.063599       1 main.go:227] handling current node
	I0701 14:22:58.069377       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:22:58.069406       1 main.go:227] handling current node
	I0701 14:23:08.080845       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:08.080967       1 main.go:227] handling current node
	I0701 14:23:18.087285       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:18.087321       1 main.go:227] handling current node
	I0701 14:23:28.093401       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:28.093447       1 main.go:227] handling current node
	I0701 14:23:38.097273       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:38.097305       1 main.go:227] handling current node
	I0701 14:23:48.109727       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:48.109758       1 main.go:227] handling current node
	
	
	==> kube-apiserver [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657] <==
	I0701 14:18:29.694200       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0701 14:18:33.700106       1 handler_proxy.go:93] no RequestInfo found in the context
	E0701 14:18:33.700239       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0701 14:18:33.700146       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.85.235:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.85.235:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.85.235:443: i/o timeout
	I0701 14:18:33.755720       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0701 14:18:33.768459       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0701 14:19:03.660593       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.175.140"}
	E0701 14:19:43.841124       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0701 14:20:25.663964       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0701 14:21:02.326603       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0701 14:21:02.326753       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0701 14:21:02.356744       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0701 14:21:02.356790       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0701 14:21:02.377756       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0701 14:21:02.377884       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0701 14:21:02.416248       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0701 14:21:02.416924       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0701 14:21:02.915268       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0701 14:21:03.219137       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.184.120"}
	W0701 14:21:03.366073       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0701 14:21:03.417317       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0701 14:21:03.422258       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0701 14:23:24.106164       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.179.149"}
	E0701 14:23:40.785698       1 watch.go:250] http2: stream closed
	
	
	==> kube-controller-manager [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393] <==
	W0701 14:22:26.918118       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:22:26.918153       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0701 14:22:27.650398       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:22:27.650435       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0701 14:23:00.835931       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:23:00.836054       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0701 14:23:06.999304       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:23:06.999344       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0701 14:23:09.431841       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:23:09.431966       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0701 14:23:23.880708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="50.712538ms"
	I0701 14:23:23.890842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="9.145926ms"
	I0701 14:23:23.890924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.856µs"
	I0701 14:23:23.921378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.7µs"
	I0701 14:23:26.735385       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="48.575µs"
	I0701 14:23:27.742874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.503µs"
	I0701 14:23:28.734756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="43.504µs"
	I0701 14:23:41.422303       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0701 14:23:41.426881       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0701 14:23:41.427002       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="4.013µs"
	I0701 14:23:41.772300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="45.4µs"
	W0701 14:23:43.933603       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:23:43.933641       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0701 14:23:44.741202       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:23:44.741241       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba] <==
	I0701 14:16:39.287139       1 server_linux.go:69] "Using iptables proxy"
	I0701 14:16:39.484252       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0701 14:16:39.736474       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0701 14:16:39.736537       1 server_linux.go:165] "Using iptables Proxier"
	I0701 14:16:39.774595       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0701 14:16:39.774697       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0701 14:16:39.774745       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 14:16:39.775014       1 server.go:872] "Version info" version="v1.30.2"
	I0701 14:16:39.775080       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 14:16:39.819782       1 config.go:192] "Starting service config controller"
	I0701 14:16:39.819877       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 14:16:39.820069       1 config.go:101] "Starting endpoint slice config controller"
	I0701 14:16:39.847398       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 14:16:39.820539       1 config.go:319] "Starting node config controller"
	I0701 14:16:39.850820       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 14:16:39.945864       1 shared_informer.go:320] Caches are synced for service config
	I0701 14:16:39.953112       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 14:16:39.960916       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8] <==
	W0701 14:16:17.168361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 14:16:17.168381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 14:16:17.168429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 14:16:17.168444       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 14:16:17.168488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 14:16:17.168505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 14:16:17.168595       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0701 14:16:17.168609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0701 14:16:17.168643       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 14:16:17.168695       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 14:16:17.168782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 14:16:17.168801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 14:16:17.168860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 14:16:17.168876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 14:16:17.997209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 14:16:17.997337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 14:16:18.020656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 14:16:18.021351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0701 14:16:18.242822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 14:16:18.242975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 14:16:18.276477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0701 14:16:18.276586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0701 14:16:18.279444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 14:16:18.279587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0701 14:16:18.761837       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 01 14:23:35 addons-929335 kubelet[1552]: E0701 14:23:35.692469    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(25af24ab-7674-4c32-b452-00053e068d4c)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="25af24ab-7674-4c32-b452-00053e068d4c"
	Jul 01 14:23:39 addons-929335 kubelet[1552]: I0701 14:23:39.976784    1552 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgt68\" (UniqueName: \"kubernetes.io/projected/25af24ab-7674-4c32-b452-00053e068d4c-kube-api-access-bgt68\") pod \"25af24ab-7674-4c32-b452-00053e068d4c\" (UID: \"25af24ab-7674-4c32-b452-00053e068d4c\") "
	Jul 01 14:23:39 addons-929335 kubelet[1552]: I0701 14:23:39.978884    1552 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25af24ab-7674-4c32-b452-00053e068d4c-kube-api-access-bgt68" (OuterVolumeSpecName: "kube-api-access-bgt68") pod "25af24ab-7674-4c32-b452-00053e068d4c" (UID: "25af24ab-7674-4c32-b452-00053e068d4c"). InnerVolumeSpecName "kube-api-access-bgt68". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 01 14:23:40 addons-929335 kubelet[1552]: I0701 14:23:40.077456    1552 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bgt68\" (UniqueName: \"kubernetes.io/projected/25af24ab-7674-4c32-b452-00053e068d4c-kube-api-access-bgt68\") on node \"addons-929335\" DevicePath \"\""
	Jul 01 14:23:40 addons-929335 kubelet[1552]: I0701 14:23:40.692575    1552 scope.go:117] "RemoveContainer" containerID="1b3807076b581094994edfdb325b80931912c7fa3a039855597c48b7b582ab34"
	Jul 01 14:23:40 addons-929335 kubelet[1552]: I0701 14:23:40.747870    1552 scope.go:117] "RemoveContainer" containerID="c012c866d3d0c59042b2cc7465b61255f4f049c6aada67c2221dfed9736cf044"
	Jul 01 14:23:41 addons-929335 kubelet[1552]: I0701 14:23:41.691880    1552 scope.go:117] "RemoveContainer" containerID="8c9113c9643e879eb9692538390f9ea69d3fbd413925c088565df60cfc15318d"
	Jul 01 14:23:41 addons-929335 kubelet[1552]: E0701 14:23:41.692335    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-8j46m_gadget(af7151fd-575f-412e-84ee-483ab9498590)\"" pod="gadget/gadget-8j46m" podUID="af7151fd-575f-412e-84ee-483ab9498590"
	Jul 01 14:23:41 addons-929335 kubelet[1552]: I0701 14:23:41.693149    1552 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20a534a5-31a0-421a-9877-934d44bc67cc" path="/var/lib/kubelet/pods/20a534a5-31a0-421a-9877-934d44bc67cc/volumes"
	Jul 01 14:23:41 addons-929335 kubelet[1552]: I0701 14:23:41.693573    1552 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25af24ab-7674-4c32-b452-00053e068d4c" path="/var/lib/kubelet/pods/25af24ab-7674-4c32-b452-00053e068d4c/volumes"
	Jul 01 14:23:41 addons-929335 kubelet[1552]: I0701 14:23:41.693989    1552 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e03b1f2-3e0a-4213-ae10-c36811ec3933" path="/var/lib/kubelet/pods/3e03b1f2-3e0a-4213-ae10-c36811ec3933/volumes"
	Jul 01 14:23:41 addons-929335 kubelet[1552]: I0701 14:23:41.752269    1552 scope.go:117] "RemoveContainer" containerID="1b3807076b581094994edfdb325b80931912c7fa3a039855597c48b7b582ab34"
	Jul 01 14:23:41 addons-929335 kubelet[1552]: I0701 14:23:41.752475    1552 scope.go:117] "RemoveContainer" containerID="80ba2de1a9989a8fab4559d13996447401349e0643fa3473c47c5c8409e8fba5"
	Jul 01 14:23:41 addons-929335 kubelet[1552]: E0701 14:23:41.752728    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-t4bld_default(df186926-03b3-4ae0-a031-91f5b7ef161d)\"" pod="default/hello-world-app-86c47465fc-t4bld" podUID="df186926-03b3-4ae0-a031-91f5b7ef161d"
	Jul 01 14:23:44 addons-929335 kubelet[1552]: I0701 14:23:44.710377    1552 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnr65\" (UniqueName: \"kubernetes.io/projected/f7aef7c5-db2f-4e8d-a121-42eaf3f67850-kube-api-access-wnr65\") pod \"f7aef7c5-db2f-4e8d-a121-42eaf3f67850\" (UID: \"f7aef7c5-db2f-4e8d-a121-42eaf3f67850\") "
	Jul 01 14:23:44 addons-929335 kubelet[1552]: I0701 14:23:44.710430    1552 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7aef7c5-db2f-4e8d-a121-42eaf3f67850-webhook-cert\") pod \"f7aef7c5-db2f-4e8d-a121-42eaf3f67850\" (UID: \"f7aef7c5-db2f-4e8d-a121-42eaf3f67850\") "
	Jul 01 14:23:44 addons-929335 kubelet[1552]: I0701 14:23:44.712894    1552 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7aef7c5-db2f-4e8d-a121-42eaf3f67850-kube-api-access-wnr65" (OuterVolumeSpecName: "kube-api-access-wnr65") pod "f7aef7c5-db2f-4e8d-a121-42eaf3f67850" (UID: "f7aef7c5-db2f-4e8d-a121-42eaf3f67850"). InnerVolumeSpecName "kube-api-access-wnr65". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 01 14:23:44 addons-929335 kubelet[1552]: I0701 14:23:44.717666    1552 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7aef7c5-db2f-4e8d-a121-42eaf3f67850-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f7aef7c5-db2f-4e8d-a121-42eaf3f67850" (UID: "f7aef7c5-db2f-4e8d-a121-42eaf3f67850"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 01 14:23:44 addons-929335 kubelet[1552]: I0701 14:23:44.760603    1552 scope.go:117] "RemoveContainer" containerID="607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116"
	Jul 01 14:23:44 addons-929335 kubelet[1552]: I0701 14:23:44.779275    1552 scope.go:117] "RemoveContainer" containerID="607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116"
	Jul 01 14:23:44 addons-929335 kubelet[1552]: E0701 14:23:44.779714    1552 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116\": container with ID starting with 607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116 not found: ID does not exist" containerID="607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116"
	Jul 01 14:23:44 addons-929335 kubelet[1552]: I0701 14:23:44.779751    1552 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116"} err="failed to get container status \"607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116\": rpc error: code = NotFound desc = could not find container \"607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116\": container with ID starting with 607a7ee7972a3e2b75633ae9315b1f4a1764538ddd0aded948e672d6f5e99116 not found: ID does not exist"
	Jul 01 14:23:44 addons-929335 kubelet[1552]: I0701 14:23:44.811470    1552 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wnr65\" (UniqueName: \"kubernetes.io/projected/f7aef7c5-db2f-4e8d-a121-42eaf3f67850-kube-api-access-wnr65\") on node \"addons-929335\" DevicePath \"\""
	Jul 01 14:23:44 addons-929335 kubelet[1552]: I0701 14:23:44.811514    1552 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7aef7c5-db2f-4e8d-a121-42eaf3f67850-webhook-cert\") on node \"addons-929335\" DevicePath \"\""
	Jul 01 14:23:45 addons-929335 kubelet[1552]: I0701 14:23:45.693289    1552 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7aef7c5-db2f-4e8d-a121-42eaf3f67850" path="/var/lib/kubelet/pods/f7aef7c5-db2f-4e8d-a121-42eaf3f67850/volumes"
	
	
	==> storage-provisioner [cd615c19161cd88f920f62f148cffc09c7eb70fe165441223629793a8598b765] <==
	I0701 14:17:09.154063       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0701 14:17:09.174307       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0701 14:17:09.174358       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0701 14:17:09.189621       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0701 14:17:09.189941       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-929335_467bfbe3-cc7a-4f06-8aff-686955b35647!
	I0701 14:17:09.191291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8cb3561d-e28c-48ec-8580-912e5e2662a2", APIVersion:"v1", ResourceVersion:"897", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-929335_467bfbe3-cc7a-4f06-8aff-686955b35647 became leader
	I0701 14:17:09.290090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-929335_467bfbe3-cc7a-4f06-8aff-686955b35647!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-929335 -n addons-929335
helpers_test.go:261: (dbg) Run:  kubectl --context addons-929335 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.35s)
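
The failure mode is consistent across both probes: the in-node curl hangs until ssh gives up (exit status 28), and the kubelet log in the post-mortem above shows kube-ingress-dns-minikube stuck in CrashLoopBackOff, which would also explain the nslookup timeout against 192.168.49.2. A minimal manual-reproduction sketch, assuming the same profile name addons-929335 (pod names are run-specific, taken here only from the log above):

	# Inspect the ingress controller and the crash-looping ingress-dns pod.
	kubectl --context addons-929335 -n ingress-nginx get pods -o wide
	kubectl --context addons-929335 -n kube-system logs kube-ingress-dns-minikube --previous

	# Re-run the failing request from inside the node with an explicit timeout,
	# so a hang surfaces as a curl error rather than ssh exit status 28.
	minikube -p addons-929335 ssh "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"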

                                                
                                    
TestAddons/parallel/MetricsServer (310.22s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.619902ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-7ddxq" [d044ed9e-3f07-4293-b20a-7710385bba17] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013574018s
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (109.340131ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 3m3.120523067s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (92.677215ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 3m6.119687579s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (109.386474ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 3m12.321477343s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (98.494186ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 3m19.092301638s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (239.556405ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 3m27.305917544s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (95.248565ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 3m39.9895996s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (98.707619ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 4m12.677515747s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (90.26718ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 4m34.912664445s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (96.574223ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 5m22.03073048s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (84.896218ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 6m14.589871653s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (85.616945ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 7m29.37601593s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-929335 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-929335 top pods -n kube-system: exit status 1 (92.23416ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-s8jw9, age: 8m5.083888769s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
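Taken together with the kube-apiserver log from the earlier post-mortem (v1beta1.metrics.k8s.io returning 503 and "dial tcp 10.103.85.235:443: i/o timeout"), the repeated "Metrics not available" errors point at the aggregated metrics API never becoming available rather than at kubectl top itself. A hedged sketch of the usual manual checks, assuming the same profile name addons-929335:

	# Is the aggregated metrics APIService registered and Available?
	kubectl --context addons-929335 get apiservice v1beta1.metrics.k8s.io

	# Query the metrics API directly, bypassing kubectl top's availability checks.
	kubectl --context addons-929335 get --raw /apis/metrics.k8s.io/v1beta1/nodes

	# Check what metrics-server itself is logging.
	kubectl --context addons-929335 -n kube-system logs deploy/metrics-server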
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-929335
helpers_test.go:235: (dbg) docker inspect addons-929335:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771",
	        "Created": "2024-07-01T14:15:52.872318521Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3714989,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-01T14:15:53.002432627Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cf53f54b1bed0b432ebf08c6ac817bec062867b90e25c5452b8e7c3276a7ff",
	        "ResolvConfPath": "/var/lib/docker/containers/65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771/hostname",
	        "HostsPath": "/var/lib/docker/containers/65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771/hosts",
	        "LogPath": "/var/lib/docker/containers/65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771/65d5b1b0f7f0a682360538e2c6e7aef6cb9883c8df68d86ed4a42999dd208771-json.log",
	        "Name": "/addons-929335",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-929335:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-929335",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/df5a9c6cc6b85d0a05bebe77804bb3f6909353b546779825eac1ac22d05fbeca-init/diff:/var/lib/docker/overlay2/c3139abb5cf1c83f6f12f6a5f4a9c8df468321ed41d6e455d104ebf4c7d8657d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/df5a9c6cc6b85d0a05bebe77804bb3f6909353b546779825eac1ac22d05fbeca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/df5a9c6cc6b85d0a05bebe77804bb3f6909353b546779825eac1ac22d05fbeca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/df5a9c6cc6b85d0a05bebe77804bb3f6909353b546779825eac1ac22d05fbeca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-929335",
	                "Source": "/var/lib/docker/volumes/addons-929335/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-929335",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-929335",
	                "name.minikube.sigs.k8s.io": "addons-929335",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91b7a1a3f91955bf00fc9baba8b0810bc5106999d954fb2e040046ed7247965a",
	            "SandboxKey": "/var/run/docker/netns/91b7a1a3f919",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33903"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-929335": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "259bced3400858624c0bb065c3728c922a312012e2621de527c2f60710e627ba",
	                    "EndpointID": "4acca51483063e8b4c164e723fcc1e17a204ecb0fe40713da9b89841755a0227",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-929335",
	                        "65d5b1b0f7f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
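The inspect dump above is the raw material the test harness mines for connection details; in particular, NetworkSettings.Ports records which loopback ports Docker assigned to the container. The same Go template that appears in the cli_runner calls later in this log can be replayed by hand. A minimal sketch (assumes the addons-929335 container still exists on the host):

  # Print the host port mapped to the container's SSH port (22/tcp).
  # For this run the inspect output above shows 33900.
  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-929335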
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-929335 -n addons-929335
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-929335 logs -n 25: (1.578790118s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-789626                                                                     | download-only-789626   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| delete  | -p download-only-281343                                                                     | download-only-281343   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| delete  | -p download-only-789626                                                                     | download-only-789626   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| start   | --download-only -p                                                                          | download-docker-822470 | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | download-docker-822470                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-822470                                                                   | download-docker-822470 | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-147566   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | binary-mirror-147566                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42755                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-147566                                                                     | binary-mirror-147566   | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| addons  | disable dashboard -p                                                                        | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | addons-929335                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | addons-929335                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-929335 --wait=true                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:19 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	|         | -p addons-929335                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-929335 ip                                                                            | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	| addons  | addons-929335 addons disable                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	|         | -p addons-929335                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-929335 ssh cat                                                                       | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	|         | /opt/local-path-provisioner/pvc-c612bd66-1de9-4129-954e-9710bab6cabd_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-929335 addons disable                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:20 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:19 UTC | 01 Jul 24 14:19 UTC |
	|         | addons-929335                                                                               |                        |         |         |                     |                     |
	| addons  | addons-929335 addons                                                                        | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:20 UTC | 01 Jul 24 14:21 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-929335 addons                                                                        | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:21 UTC | 01 Jul 24 14:21 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-929335 ssh curl -s                                                                   | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:21 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-929335 ip                                                                            | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:23 UTC | 01 Jul 24 14:23 UTC |
	| addons  | addons-929335 addons disable                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:23 UTC | 01 Jul 24 14:23 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-929335 addons disable                                                                | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:23 UTC | 01 Jul 24 14:23 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:23 UTC | 01 Jul 24 14:24 UTC |
	|         | addons-929335                                                                               |                        |         |         |                     |                     |
	| addons  | addons-929335 addons                                                                        | addons-929335          | jenkins | v1.33.1 | 01 Jul 24 14:24 UTC | 01 Jul 24 14:24 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 14:15:28
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
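Given the [IWEF] severity prefix described in the format line above, warning and error lines can be filtered out of a saved copy of this log with a plain grep. A small sketch (logs.txt is a hypothetical file holding the log below):

  # Keep only warning (W) and error (E) lines; the glog prefix is a
  # severity letter immediately followed by the four-digit date (mmdd).
  grep -E '^[[:space:]]*[WE][0-9]{4}' logs.txt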
	I0701 14:15:28.076046 3714493 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:15:28.076184 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:15:28.076194 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:15:28.076199 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:15:28.076488 3714493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 14:15:28.076935 3714493 out.go:298] Setting JSON to false
	I0701 14:15:28.077905 3714493 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":165479,"bootTime":1719677849,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0701 14:15:28.077979 3714493 start.go:139] virtualization:  
	I0701 14:15:28.080375 3714493 out.go:177] * [addons-929335] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0701 14:15:28.082871 3714493 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 14:15:28.083067 3714493 notify.go:220] Checking for updates...
	I0701 14:15:28.087089 3714493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 14:15:28.088848 3714493 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:15:28.090827 3714493 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	I0701 14:15:28.092706 3714493 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0701 14:15:28.094518 3714493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 14:15:28.096503 3714493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 14:15:28.126501 3714493 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0701 14:15:28.126602 3714493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:15:28.182184 3714493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-01 14:15:28.173117927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:15:28.182300 3714493 docker.go:295] overlay module found
	I0701 14:15:28.185140 3714493 out.go:177] * Using the docker driver based on user configuration
	I0701 14:15:28.187141 3714493 start.go:297] selected driver: docker
	I0701 14:15:28.187158 3714493 start.go:901] validating driver "docker" against <nil>
	I0701 14:15:28.187172 3714493 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 14:15:28.187823 3714493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:15:28.236319 3714493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-01 14:15:28.227485738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:15:28.236487 3714493 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 14:15:28.236711 3714493 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 14:15:28.238778 3714493 out.go:177] * Using Docker driver with root privileges
	I0701 14:15:28.240814 3714493 cni.go:84] Creating CNI manager for ""
	I0701 14:15:28.240836 3714493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 14:15:28.240847 3714493 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 14:15:28.240929 3714493 start.go:340] cluster config:
	{Name:addons-929335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-929335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
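This struct is what gets persisted to the profile's config.json (the "Saving config" lines below show the path), so the effective settings can be read back without re-parsing the log. A sketch assuming jq is available; the field names are taken from the struct dump above:

  # Read the Kubernetes version and container runtime back from the saved profile.
  jq -r '.KubernetesConfig.KubernetesVersion, .KubernetesConfig.ContainerRuntime' \
    /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/config.json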
	I0701 14:15:28.243318 3714493 out.go:177] * Starting "addons-929335" primary control-plane node in "addons-929335" cluster
	I0701 14:15:28.245071 3714493 cache.go:121] Beginning downloading kic base image for docker with crio
	I0701 14:15:28.246947 3714493 out.go:177] * Pulling base image v0.0.44-1719413016-19142 ...
	I0701 14:15:28.249160 3714493 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:15:28.249212 3714493 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0701 14:15:28.249220 3714493 cache.go:56] Caching tarball of preloaded images
	I0701 14:15:28.249256 3714493 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon
	I0701 14:15:28.249297 3714493 preload.go:173] Found /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0701 14:15:28.249306 3714493 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0701 14:15:28.249651 3714493 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/config.json ...
	I0701 14:15:28.249670 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/config.json: {Name:mkf278bfd2d5e50e84cb1fa4b086afbb0de93b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:15:28.266248 3714493 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d to local cache
	I0701 14:15:28.266359 3714493 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local cache directory
	I0701 14:15:28.266381 3714493 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local cache directory, skipping pull
	I0701 14:15:28.266387 3714493 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d exists in cache, skipping pull
	I0701 14:15:28.266395 3714493 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d as a tarball
	I0701 14:15:28.266400 3714493 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d from local cache
	I0701 14:15:44.884731 3714493 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d from cached tarball
	I0701 14:15:44.884774 3714493 cache.go:194] Successfully downloaded all kic artifacts
	I0701 14:15:44.884828 3714493 start.go:360] acquireMachinesLock for addons-929335: {Name:mka8f5764327253860363894b4c32861892f785a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 14:15:44.885530 3714493 start.go:364] duration metric: took 673.977µs to acquireMachinesLock for "addons-929335"
	I0701 14:15:44.885574 3714493 start.go:93] Provisioning new machine with config: &{Name:addons-929335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-929335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0701 14:15:44.885672 3714493 start.go:125] createHost starting for "" (driver="docker")
	I0701 14:15:44.888037 3714493 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0701 14:15:44.888284 3714493 start.go:159] libmachine.API.Create for "addons-929335" (driver="docker")
	I0701 14:15:44.888327 3714493 client.go:168] LocalClient.Create starting
	I0701 14:15:44.888446 3714493 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem
	I0701 14:15:45.979184 3714493 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem
	I0701 14:15:46.514386 3714493 cli_runner.go:164] Run: docker network inspect addons-929335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0701 14:15:46.529891 3714493 cli_runner.go:211] docker network inspect addons-929335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0701 14:15:46.529971 3714493 network_create.go:284] running [docker network inspect addons-929335] to gather additional debugging logs...
	I0701 14:15:46.529992 3714493 cli_runner.go:164] Run: docker network inspect addons-929335
	W0701 14:15:46.544743 3714493 cli_runner.go:211] docker network inspect addons-929335 returned with exit code 1
	I0701 14:15:46.544780 3714493 network_create.go:287] error running [docker network inspect addons-929335]: docker network inspect addons-929335: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-929335 not found
	I0701 14:15:46.544794 3714493 network_create.go:289] output of [docker network inspect addons-929335]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-929335 not found
	
	** /stderr **
	I0701 14:15:46.544888 3714493 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 14:15:46.559405 3714493 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40006d6ce0}
	I0701 14:15:46.559447 3714493 network_create.go:124] attempt to create docker network addons-929335 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0701 14:15:46.559503 3714493 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-929335 addons-929335
	I0701 14:15:46.627461 3714493 network_create.go:108] docker network addons-929335 192.168.49.0/24 created
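The network just created can be checked with the same kind of inspect template the harness itself uses. A sketch:

  # Confirm the subnet and gateway of the freshly created minikube network.
  # Expected for this run: 192.168.49.0/24 192.168.49.1
  docker network inspect addons-929335 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'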
	I0701 14:15:46.627495 3714493 kic.go:121] calculated static IP "192.168.49.2" for the "addons-929335" container
	I0701 14:15:46.627572 3714493 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0701 14:15:46.642027 3714493 cli_runner.go:164] Run: docker volume create addons-929335 --label name.minikube.sigs.k8s.io=addons-929335 --label created_by.minikube.sigs.k8s.io=true
	I0701 14:15:46.657570 3714493 oci.go:103] Successfully created a docker volume addons-929335
	I0701 14:15:46.657677 3714493 cli_runner.go:164] Run: docker run --rm --name addons-929335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-929335 --entrypoint /usr/bin/test -v addons-929335:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d -d /var/lib
	I0701 14:15:48.682208 3714493 cli_runner.go:217] Completed: docker run --rm --name addons-929335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-929335 --entrypoint /usr/bin/test -v addons-929335:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d -d /var/lib: (2.024472652s)
	I0701 14:15:48.682238 3714493 oci.go:107] Successfully prepared a docker volume addons-929335
	I0701 14:15:48.682259 3714493 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:15:48.682278 3714493 kic.go:194] Starting extracting preloaded images to volume ...
	I0701 14:15:48.682364 3714493 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-929335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d -I lz4 -xf /preloaded.tar -C /extractDir
	I0701 14:15:52.802232 3714493 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-929335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d -I lz4 -xf /preloaded.tar -C /extractDir: (4.119827228s)
	I0701 14:15:52.802261 3714493 kic.go:203] duration metric: took 4.119980854s to extract preloaded images to volume ...
	W0701 14:15:52.802396 3714493 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0701 14:15:52.802531 3714493 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0701 14:15:52.858934 3714493 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-929335 --name addons-929335 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-929335 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-929335 --network addons-929335 --ip 192.168.49.2 --volume addons-929335:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d
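Each --publish=127.0.0.1::PORT flag in that run command asks Docker for an ephemeral host port bound to loopback; the concrete assignments are the 33900-33904 values seen in the inspect output earlier. docker port reads the same mapping directly. A sketch:

  # List the loopback ports Docker picked for the published container ports.
  # For this run: 22->33900, 2376->33901, 5000->33902, 8443->33903, 32443->33904.
  docker port addons-929335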
	I0701 14:15:53.183631 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Running}}
	I0701 14:15:53.205298 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:15:53.229244 3714493 cli_runner.go:164] Run: docker exec addons-929335 stat /var/lib/dpkg/alternatives/iptables
	I0701 14:15:53.297257 3714493 oci.go:144] the created container "addons-929335" has a running status.
	I0701 14:15:53.297289 3714493 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa...
	I0701 14:15:53.580631 3714493 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0701 14:15:53.611570 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:15:53.637407 3714493 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0701 14:15:53.637425 3714493 kic_runner.go:114] Args: [docker exec --privileged addons-929335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0701 14:15:53.724718 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:15:53.748074 3714493 machine.go:94] provisionDockerMachine start ...
	I0701 14:15:53.748176 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:53.783926 3714493 main.go:141] libmachine: Using SSH client type: native
	I0701 14:15:53.784204 3714493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I0701 14:15:53.784213 3714493 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 14:15:53.785004 3714493 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45302->127.0.0.1:33900: read: connection reset by peer
	I0701 14:15:56.924633 3714493 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-929335
	
	I0701 14:15:56.924658 3714493 ubuntu.go:169] provisioning hostname "addons-929335"
	I0701 14:15:56.924726 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:56.942045 3714493 main.go:141] libmachine: Using SSH client type: native
	I0701 14:15:56.942278 3714493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I0701 14:15:56.942292 3714493 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-929335 && echo "addons-929335" | sudo tee /etc/hostname
	I0701 14:15:57.096923 3714493 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-929335
	
	I0701 14:15:57.097050 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:57.112810 3714493 main.go:141] libmachine: Using SSH client type: native
	I0701 14:15:57.113273 3714493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I0701 14:15:57.113306 3714493 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-929335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-929335/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-929335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 14:15:57.253118 3714493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 14:15:57.253142 3714493 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19166-3708336/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-3708336/.minikube}
	I0701 14:15:57.253171 3714493 ubuntu.go:177] setting up certificates
	I0701 14:15:57.253181 3714493 provision.go:84] configureAuth start
	I0701 14:15:57.253247 3714493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-929335
	I0701 14:15:57.269614 3714493 provision.go:143] copyHostCerts
	I0701 14:15:57.269693 3714493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem (1675 bytes)
	I0701 14:15:57.269810 3714493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem (1082 bytes)
	I0701 14:15:57.269864 3714493 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem (1123 bytes)
	I0701 14:15:57.269907 3714493 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem org=jenkins.addons-929335 san=[127.0.0.1 192.168.49.2 addons-929335 localhost minikube]
	I0701 14:15:57.445909 3714493 provision.go:177] copyRemoteCerts
	I0701 14:15:57.445977 3714493 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 14:15:57.446051 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:57.461828 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:15:57.558793 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0701 14:15:57.582147 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 14:15:57.606316 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
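The server certificate copied above was generated with the SAN list logged at provision.go:117 (127.0.0.1, 192.168.49.2, addons-929335, localhost, minikube), which can be confirmed with openssl against the host-side copy. A sketch:

  # Show the Subject Alternative Names baked into the generated server cert.
  openssl x509 -noout -text \
    -in /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem \
    | grep -A1 'Subject Alternative Name'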
	I0701 14:15:57.630034 3714493 provision.go:87] duration metric: took 376.838475ms to configureAuth
	I0701 14:15:57.630067 3714493 ubuntu.go:193] setting minikube options for container-runtime
	I0701 14:15:57.630255 3714493 config.go:182] Loaded profile config "addons-929335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:15:57.630363 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:57.647176 3714493 main.go:141] libmachine: Using SSH client type: native
	I0701 14:15:57.647417 3714493 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33900 <nil> <nil>}
	I0701 14:15:57.647433 3714493 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0701 14:15:57.887076 3714493 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0701 14:15:57.887098 3714493 machine.go:97] duration metric: took 4.138997162s to provisionDockerMachine
	I0701 14:15:57.887108 3714493 client.go:171] duration metric: took 12.998770826s to LocalClient.Create
	I0701 14:15:57.887120 3714493 start.go:167] duration metric: took 12.998836648s to libmachine.API.Create "addons-929335"
	I0701 14:15:57.887128 3714493 start.go:293] postStartSetup for "addons-929335" (driver="docker")
	I0701 14:15:57.887139 3714493 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 14:15:57.887203 3714493 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 14:15:57.887243 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:57.904168 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:15:58.003397 3714493 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 14:15:58.007511 3714493 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 14:15:58.007550 3714493 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 14:15:58.007561 3714493 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 14:15:58.007571 3714493 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0701 14:15:58.007583 3714493 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/addons for local assets ...
	I0701 14:15:58.007664 3714493 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/files for local assets ...
	I0701 14:15:58.007690 3714493 start.go:296] duration metric: took 120.556462ms for postStartSetup
	I0701 14:15:58.008028 3714493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-929335
	I0701 14:15:58.028022 3714493 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/config.json ...
	I0701 14:15:58.028321 3714493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 14:15:58.028373 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:58.044783 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:15:58.137813 3714493 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 14:15:58.142149 3714493 start.go:128] duration metric: took 13.256462134s to createHost
	I0701 14:15:58.142172 3714493 start.go:83] releasing machines lock for "addons-929335", held for 13.256621323s
	I0701 14:15:58.142244 3714493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-929335
	I0701 14:15:58.158586 3714493 ssh_runner.go:195] Run: cat /version.json
	I0701 14:15:58.158649 3714493 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 14:15:58.158723 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:58.158650 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:15:58.182111 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:15:58.186181 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:15:58.401037 3714493 ssh_runner.go:195] Run: systemctl --version
	I0701 14:15:58.405032 3714493 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0701 14:15:58.547228 3714493 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 14:15:58.551572 3714493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 14:15:58.573446 3714493 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0701 14:15:58.573521 3714493 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 14:15:58.608554 3714493 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0701 14:15:58.608579 3714493 start.go:494] detecting cgroup driver to use...
	I0701 14:15:58.608613 3714493 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0701 14:15:58.608664 3714493 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 14:15:58.625680 3714493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 14:15:58.637518 3714493 docker.go:217] disabling cri-docker service (if available) ...
	I0701 14:15:58.637632 3714493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0701 14:15:58.653897 3714493 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0701 14:15:58.669193 3714493 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0701 14:15:58.758868 3714493 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0701 14:15:58.857813 3714493 docker.go:233] disabling docker service ...
	I0701 14:15:58.857883 3714493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 14:15:58.878267 3714493 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 14:15:58.890657 3714493 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 14:15:58.981579 3714493 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 14:15:59.067544 3714493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 14:15:59.079292 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 14:15:59.095373 3714493 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0701 14:15:59.095482 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.105296 3714493 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0701 14:15:59.105417 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.115131 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.125192 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.134917 3714493 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 14:15:59.143807 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.153513 3714493 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:15:59.169649 3714493 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
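
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch reconstructed from the commands in this log; the file's real ordering and its other keys may differ):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
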
	I0701 14:15:59.179480 3714493 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 14:15:59.188286 3714493 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 14:15:59.196798 3714493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:15:59.285945 3714493 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0701 14:15:59.399846 3714493 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0701 14:15:59.399979 3714493 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0701 14:15:59.403562 3714493 start.go:562] Will wait 60s for crictl version
	I0701 14:15:59.403670 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:15:59.407101 3714493 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 14:15:59.446266 3714493 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0701 14:15:59.446410 3714493 ssh_runner.go:195] Run: crio --version
	I0701 14:15:59.482013 3714493 ssh_runner.go:195] Run: crio --version
	I0701 14:15:59.520538 3714493 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0701 14:15:59.522688 3714493 cli_runner.go:164] Run: docker network inspect addons-929335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 14:15:59.540151 3714493 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0701 14:15:59.543936 3714493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 14:15:59.554647 3714493 kubeadm.go:877] updating cluster {Name:addons-929335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-929335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 14:15:59.554777 3714493 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:15:59.554851 3714493 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 14:15:59.632151 3714493 crio.go:514] all images are preloaded for cri-o runtime.
	I0701 14:15:59.632172 3714493 crio.go:433] Images already preloaded, skipping extraction
	I0701 14:15:59.632229 3714493 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 14:15:59.670022 3714493 crio.go:514] all images are preloaded for cri-o runtime.
	I0701 14:15:59.670046 3714493 cache_images.go:84] Images are preloaded, skipping loading
	I0701 14:15:59.670055 3714493 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.2 crio true true} ...
	I0701 14:15:59.670151 3714493 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-929335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-929335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 14:15:59.670241 3714493 ssh_runner.go:195] Run: crio config
	I0701 14:15:59.730366 3714493 cni.go:84] Creating CNI manager for ""
	I0701 14:15:59.730391 3714493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 14:15:59.730404 3714493 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 14:15:59.730453 3714493 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-929335 NodeName:addons-929335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 14:15:59.730622 3714493 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-929335"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
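
	The four documents above are the complete kubeadm.yaml that minikube writes to /var/tmp/minikube/kubeadm.yaml.new below. If a config like this needs to be sanity-checked outside a test run, kubeadm can exercise it without modifying the host (an illustrative command, not one this log executes):

	kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run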
	
	I0701 14:15:59.730697 3714493 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 14:15:59.739813 3714493 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 14:15:59.739905 3714493 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 14:15:59.748805 3714493 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0701 14:15:59.767433 3714493 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 14:15:59.785706 3714493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0701 14:15:59.803590 3714493 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0701 14:15:59.807149 3714493 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 14:15:59.818038 3714493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:15:59.908866 3714493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 14:15:59.922064 3714493 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335 for IP: 192.168.49.2
	I0701 14:15:59.922087 3714493 certs.go:194] generating shared ca certs ...
	I0701 14:15:59.922105 3714493 certs.go:226] acquiring lock for ca certs: {Name:mkef61a10d340f62d4856e4c226678a7bd970ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:15:59.922277 3714493 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key
	I0701 14:16:00.634062 3714493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt ...
	I0701 14:16:00.634098 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt: {Name:mk7cc0d70948e4ed02cc6b03bd67d2393f1761b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:00.634305 3714493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key ...
	I0701 14:16:00.634318 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key: {Name:mk742c097069fed85f84c630a04fca6422948097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:00.634927 3714493 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key
	I0701 14:16:00.912069 3714493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt ...
	I0701 14:16:00.912101 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt: {Name:mk1111ab1a13b413b69bba0d83843e569a4ce1dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:00.912297 3714493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key ...
	I0701 14:16:00.912309 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key: {Name:mk6c1055d7fb8e3ff12df62e15d12d33ae8610e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:00.912875 3714493 certs.go:256] generating profile certs ...
	I0701 14:16:00.912941 3714493 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.key
	I0701 14:16:00.912960 3714493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt with IP's: []
	I0701 14:16:01.699178 3714493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt ...
	I0701 14:16:01.699223 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: {Name:mk182fca4cc3e0c307e79f2cccfa26f18a3683d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:01.699418 3714493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.key ...
	I0701 14:16:01.699432 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.key: {Name:mk5d3500d252311b100e5282c81fd294ecbf86e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:01.699520 3714493 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key.37fc1b00
	I0701 14:16:01.699541 3714493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt.37fc1b00 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0701 14:16:01.926771 3714493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt.37fc1b00 ...
	I0701 14:16:01.926806 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt.37fc1b00: {Name:mk4c0a011c17e9899ed7224593d62a91b67e2f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:01.927001 3714493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key.37fc1b00 ...
	I0701 14:16:01.927018 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key.37fc1b00: {Name:mk3308ef7ff745f4831105113fa3d3c9402f03bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:01.927113 3714493 certs.go:381] copying /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt.37fc1b00 -> /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt
	I0701 14:16:01.927192 3714493 certs.go:385] copying /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key.37fc1b00 -> /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key
	I0701 14:16:01.927252 3714493 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.key
	I0701 14:16:01.927270 3714493 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.crt with IP's: []
	I0701 14:16:02.398156 3714493 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.crt ...
	I0701 14:16:02.398190 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.crt: {Name:mkd44519905e4e13028e4ac695d33cab5461d876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:02.398849 3714493 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.key ...
	I0701 14:16:02.398871 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.key: {Name:mk215786bde510015bc46c4b221b63c4f8549acc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:02.399579 3714493 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 14:16:02.399625 3714493 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem (1082 bytes)
	I0701 14:16:02.399660 3714493 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem (1123 bytes)
	I0701 14:16:02.399694 3714493 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem (1675 bytes)
	I0701 14:16:02.400316 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 14:16:02.425817 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 14:16:02.450155 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 14:16:02.473291 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 14:16:02.496432 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0701 14:16:02.520341 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0701 14:16:02.544457 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 14:16:02.568333 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 14:16:02.592203 3714493 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 14:16:02.617084 3714493 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 14:16:02.634812 3714493 ssh_runner.go:195] Run: openssl version
	I0701 14:16:02.640385 3714493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 14:16:02.650103 3714493 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:16:02.653730 3714493 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:16:02.653797 3714493 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:16:02.660622 3714493 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
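
	For reference, the /etc/ssl/certs/b5213941.0 name follows OpenSSL's subject-hash convention: the hash printed by the "openssl x509 -hash -noout" run above becomes the certificate's lookup filename, suffixed with .0. An equivalent standalone command (illustrative only, not executed by this test) would be:

	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"
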
	I0701 14:16:02.670406 3714493 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 14:16:02.673869 3714493 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0701 14:16:02.673917 3714493 kubeadm.go:391] StartCluster: {Name:addons-929335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-929335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 14:16:02.674002 3714493 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0701 14:16:02.674064 3714493 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 14:16:02.710117 3714493 cri.go:89] found id: ""
	I0701 14:16:02.710221 3714493 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0701 14:16:02.718858 3714493 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0701 14:16:02.727650 3714493 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0701 14:16:02.727761 3714493 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0701 14:16:02.736365 3714493 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0701 14:16:02.736386 3714493 kubeadm.go:156] found existing configuration files:
	
	I0701 14:16:02.736461 3714493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0701 14:16:02.745176 3714493 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0701 14:16:02.745306 3714493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0701 14:16:02.753657 3714493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0701 14:16:02.762104 3714493 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0701 14:16:02.762188 3714493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0701 14:16:02.770704 3714493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0701 14:16:02.779144 3714493 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0701 14:16:02.779247 3714493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0701 14:16:02.787321 3714493 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0701 14:16:02.795832 3714493 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0701 14:16:02.795918 3714493 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0701 14:16:02.803930 3714493 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0701 14:16:02.851829 3714493 kubeadm.go:309] [init] Using Kubernetes version: v1.30.2
	I0701 14:16:02.852056 3714493 kubeadm.go:309] [preflight] Running pre-flight checks
	I0701 14:16:02.910180 3714493 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0701 14:16:02.910294 3714493 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1063-aws
	I0701 14:16:02.910379 3714493 kubeadm.go:309] OS: Linux
	I0701 14:16:02.910445 3714493 kubeadm.go:309] CGROUPS_CPU: enabled
	I0701 14:16:02.910528 3714493 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0701 14:16:02.910592 3714493 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0701 14:16:02.910699 3714493 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0701 14:16:02.910770 3714493 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0701 14:16:02.910834 3714493 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0701 14:16:02.910883 3714493 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0701 14:16:02.910944 3714493 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0701 14:16:02.910996 3714493 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0701 14:16:02.978221 3714493 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0701 14:16:02.978409 3714493 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0701 14:16:02.978541 3714493 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0701 14:16:03.199067 3714493 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0701 14:16:03.202966 3714493 out.go:204]   - Generating certificates and keys ...
	I0701 14:16:03.203082 3714493 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0701 14:16:03.203164 3714493 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0701 14:16:03.668055 3714493 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0701 14:16:04.509348 3714493 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0701 14:16:05.464755 3714493 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0701 14:16:05.879208 3714493 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0701 14:16:06.586814 3714493 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0701 14:16:06.586967 3714493 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-929335 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0701 14:16:07.565364 3714493 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0701 14:16:07.565521 3714493 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-929335 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0701 14:16:07.718316 3714493 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0701 14:16:08.047750 3714493 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0701 14:16:08.571041 3714493 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0701 14:16:08.571352 3714493 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0701 14:16:08.749871 3714493 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0701 14:16:09.147640 3714493 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0701 14:16:09.543890 3714493 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0701 14:16:09.896370 3714493 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0701 14:16:10.361785 3714493 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0701 14:16:10.362685 3714493 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0701 14:16:10.365769 3714493 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0701 14:16:10.368028 3714493 out.go:204]   - Booting up control plane ...
	I0701 14:16:10.368160 3714493 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0701 14:16:10.368242 3714493 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0701 14:16:10.369305 3714493 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0701 14:16:10.386119 3714493 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0701 14:16:10.387516 3714493 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0701 14:16:10.387738 3714493 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0701 14:16:10.488045 3714493 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0701 14:16:10.488132 3714493 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0701 14:16:11.989612 3714493 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.501644867s
	I0701 14:16:11.989707 3714493 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0701 14:16:18.991274 3714493 kubeadm.go:309] [api-check] The API server is healthy after 7.001606639s
	I0701 14:16:19.011022 3714493 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0701 14:16:19.027312 3714493 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0701 14:16:19.083253 3714493 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0701 14:16:19.083446 3714493 kubeadm.go:309] [mark-control-plane] Marking the node addons-929335 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0701 14:16:19.101990 3714493 kubeadm.go:309] [bootstrap-token] Using token: ypqvhc.tyl4o7d4g0682pi9
	I0701 14:16:19.104330 3714493 out.go:204]   - Configuring RBAC rules ...
	I0701 14:16:19.104455 3714493 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0701 14:16:19.108412 3714493 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0701 14:16:19.121674 3714493 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0701 14:16:19.126021 3714493 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0701 14:16:19.130540 3714493 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0701 14:16:19.136155 3714493 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0701 14:16:19.397600 3714493 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0701 14:16:19.842548 3714493 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0701 14:16:20.402445 3714493 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0701 14:16:20.403731 3714493 kubeadm.go:309] 
	I0701 14:16:20.403804 3714493 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0701 14:16:20.403814 3714493 kubeadm.go:309] 
	I0701 14:16:20.403888 3714493 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0701 14:16:20.403897 3714493 kubeadm.go:309] 
	I0701 14:16:20.403922 3714493 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0701 14:16:20.403982 3714493 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0701 14:16:20.404035 3714493 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0701 14:16:20.404044 3714493 kubeadm.go:309] 
	I0701 14:16:20.404096 3714493 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0701 14:16:20.404104 3714493 kubeadm.go:309] 
	I0701 14:16:20.404150 3714493 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0701 14:16:20.404158 3714493 kubeadm.go:309] 
	I0701 14:16:20.404209 3714493 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0701 14:16:20.404284 3714493 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0701 14:16:20.404353 3714493 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0701 14:16:20.404362 3714493 kubeadm.go:309] 
	I0701 14:16:20.404443 3714493 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0701 14:16:20.404520 3714493 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0701 14:16:20.404529 3714493 kubeadm.go:309] 
	I0701 14:16:20.404609 3714493 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ypqvhc.tyl4o7d4g0682pi9 \
	I0701 14:16:20.404711 3714493 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:147605410e7daebab3b068442614c3748ab53b9f1af728ca2913c2913dc90190 \
	I0701 14:16:20.404735 3714493 kubeadm.go:309] 	--control-plane 
	I0701 14:16:20.404744 3714493 kubeadm.go:309] 
	I0701 14:16:20.404825 3714493 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0701 14:16:20.404833 3714493 kubeadm.go:309] 
	I0701 14:16:20.404935 3714493 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ypqvhc.tyl4o7d4g0682pi9 \
	I0701 14:16:20.405053 3714493 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:147605410e7daebab3b068442614c3748ab53b9f1af728ca2913c2913dc90190 
	I0701 14:16:20.408632 3714493 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1063-aws\n", err: exit status 1
	I0701 14:16:20.408751 3714493 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0701 14:16:20.408772 3714493 cni.go:84] Creating CNI manager for ""
	I0701 14:16:20.408783 3714493 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 14:16:20.410804 3714493 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0701 14:16:20.412471 3714493 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0701 14:16:20.416311 3714493 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0701 14:16:20.416332 3714493 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0701 14:16:20.437079 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0701 14:16:20.711020 3714493 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0701 14:16:20.711169 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:20.711267 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-929335 minikube.k8s.io/updated_at=2024_07_01T14_16_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c minikube.k8s.io/name=addons-929335 minikube.k8s.io/primary=true
	I0701 14:16:20.724072 3714493 ops.go:34] apiserver oom_adj: -16
	I0701 14:16:20.866671 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:21.367706 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:21.866828 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:22.366823 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:22.867571 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:23.367405 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:23.866798 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:24.367603 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:24.867298 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:25.366818 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:25.867502 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:26.367284 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:26.867540 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:27.367701 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:27.867733 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:28.366856 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:28.866903 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:29.367275 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:29.867547 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:30.367587 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:30.866995 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:31.367733 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:31.867506 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:32.367261 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:32.867677 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:33.367224 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:33.867549 3714493 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0701 14:16:34.017778 3714493 kubeadm.go:1107] duration metric: took 13.306658129s to wait for elevateKubeSystemPrivileges
	W0701 14:16:34.017812 3714493 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0701 14:16:34.017820 3714493 kubeadm.go:393] duration metric: took 31.343907492s to StartCluster
	I0701 14:16:34.017837 3714493 settings.go:142] acquiring lock: {Name:mke9008d6920f4be65eddeda5d60c738ed3823ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:34.017959 3714493 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:16:34.018393 3714493 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/kubeconfig: {Name:mk4d5838a81c57a1d9ec9a509328664588dd34aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:16:34.018605 3714493 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0701 14:16:34.018715 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0701 14:16:34.018992 3714493 config.go:182] Loaded profile config "addons-929335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:16:34.019024 3714493 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0701 14:16:34.019110 3714493 addons.go:69] Setting yakd=true in profile "addons-929335"
	I0701 14:16:34.019132 3714493 addons.go:234] Setting addon yakd=true in "addons-929335"
	I0701 14:16:34.019156 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.019627 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.020110 3714493 addons.go:69] Setting metrics-server=true in profile "addons-929335"
	I0701 14:16:34.020142 3714493 addons.go:234] Setting addon metrics-server=true in "addons-929335"
	I0701 14:16:34.020177 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.020618 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.020780 3714493 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-929335"
	I0701 14:16:34.020808 3714493 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-929335"
	I0701 14:16:34.020834 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.021383 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.021779 3714493 addons.go:69] Setting cloud-spanner=true in profile "addons-929335"
	I0701 14:16:34.021811 3714493 addons.go:234] Setting addon cloud-spanner=true in "addons-929335"
	I0701 14:16:34.021839 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.022440 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.025353 3714493 addons.go:69] Setting registry=true in profile "addons-929335"
	I0701 14:16:34.025396 3714493 addons.go:234] Setting addon registry=true in "addons-929335"
	I0701 14:16:34.025430 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.025874 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.026037 3714493 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-929335"
	I0701 14:16:34.026084 3714493 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-929335"
	I0701 14:16:34.026106 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.026494 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.045571 3714493 addons.go:69] Setting default-storageclass=true in profile "addons-929335"
	I0701 14:16:34.045635 3714493 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-929335"
	I0701 14:16:34.045827 3714493 addons.go:69] Setting storage-provisioner=true in profile "addons-929335"
	I0701 14:16:34.045852 3714493 addons.go:234] Setting addon storage-provisioner=true in "addons-929335"
	I0701 14:16:34.045887 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.046377 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.047538 3714493 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-929335"
	I0701 14:16:34.047635 3714493 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-929335"
	I0701 14:16:34.048754 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.065900 3714493 addons.go:69] Setting volcano=true in profile "addons-929335"
	I0701 14:16:34.066023 3714493 addons.go:234] Setting addon volcano=true in "addons-929335"
	I0701 14:16:34.066100 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.066675 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.046588 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.046596 3714493 addons.go:69] Setting gcp-auth=true in profile "addons-929335"
	I0701 14:16:34.079946 3714493 mustload.go:65] Loading cluster: addons-929335
	I0701 14:16:34.080178 3714493 config.go:182] Loaded profile config "addons-929335": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:16:34.046603 3714493 addons.go:69] Setting ingress=true in profile "addons-929335"
	I0701 14:16:34.092598 3714493 addons.go:234] Setting addon ingress=true in "addons-929335"
	I0701 14:16:34.097387 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.046607 3714493 addons.go:69] Setting ingress-dns=true in profile "addons-929335"
	I0701 14:16:34.046613 3714493 addons.go:69] Setting inspektor-gadget=true in profile "addons-929335"
	I0701 14:16:34.046680 3714493 out.go:177] * Verifying Kubernetes components...
	I0701 14:16:34.092517 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.092557 3714493 addons.go:69] Setting volumesnapshots=true in profile "addons-929335"
	I0701 14:16:34.101619 3714493 addons.go:234] Setting addon ingress-dns=true in "addons-929335"
	I0701 14:16:34.101719 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.111422 3714493 addons.go:234] Setting addon inspektor-gadget=true in "addons-929335"
	I0701 14:16:34.113267 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.113889 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.134020 3714493 addons.go:234] Setting addon volumesnapshots=true in "addons-929335"
	I0701 14:16:34.134121 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.134668 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.150443 3714493 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:16:34.156218 3714493 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0701 14:16:34.166506 3714493 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 14:16:34.166576 3714493 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 14:16:34.166691 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.177618 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.178279 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.217350 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0701 14:16:34.224625 3714493 out.go:177]   - Using image docker.io/registry:2.8.3
	I0701 14:16:34.226158 3714493 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0701 14:16:34.226323 3714493 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0701 14:16:34.226469 3714493 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0701 14:16:34.229994 3714493 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0701 14:16:34.230064 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0701 14:16:34.230160 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.247694 3714493 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0701 14:16:34.247757 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0701 14:16:34.247864 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.247989 3714493 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0701 14:16:34.248227 3714493 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0701 14:16:34.248239 3714493 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0701 14:16:34.248291 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.282251 3714493 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-929335"
	I0701 14:16:34.282341 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.282849 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.298698 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0701 14:16:34.299021 3714493 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0701 14:16:34.299039 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0701 14:16:34.299107 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.342239 3714493 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 14:16:34.344514 3714493 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 14:16:34.344539 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 14:16:34.344607 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	W0701 14:16:34.351627 3714493 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0701 14:16:34.353410 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0701 14:16:34.355393 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0701 14:16:34.355591 3714493 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0701 14:16:34.356120 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.361887 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0701 14:16:34.361905 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0701 14:16:34.361969 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.371335 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0701 14:16:34.373082 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0701 14:16:34.375153 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0701 14:16:34.376917 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0701 14:16:34.377987 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0701 14:16:34.385067 3714493 addons.go:234] Setting addon default-storageclass=true in "addons-929335"
	I0701 14:16:34.385110 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:34.385532 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:34.385732 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0701 14:16:34.385747 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0701 14:16:34.385801 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.400303 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.401154 3714493 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0701 14:16:34.404862 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0701 14:16:34.404930 3714493 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0701 14:16:34.405087 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.459254 3714493 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0701 14:16:34.461277 3714493 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0701 14:16:34.464649 3714493 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0701 14:16:34.467502 3714493 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0701 14:16:34.467678 3714493 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0701 14:16:34.467697 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0701 14:16:34.467765 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.472341 3714493 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0701 14:16:34.472363 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0701 14:16:34.472432 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.490919 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.510694 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.510816 3714493 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 14:16:34.522962 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.526792 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.569057 3714493 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0701 14:16:34.571412 3714493 out.go:177]   - Using image docker.io/busybox:stable
	I0701 14:16:34.576762 3714493 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0701 14:16:34.576781 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0701 14:16:34.576850 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.608784 3714493 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 14:16:34.608845 3714493 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 14:16:34.608938 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:34.619735 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.620166 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.624184 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.628534 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.644585 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.650554 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.674245 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.682288 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:34.880571 3714493 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 14:16:34.880595 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0701 14:16:34.906530 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0701 14:16:34.945534 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0701 14:16:35.049105 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0701 14:16:35.062378 3714493 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0701 14:16:35.062448 3714493 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0701 14:16:35.065120 3714493 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0701 14:16:35.065181 3714493 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0701 14:16:35.068346 3714493 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 14:16:35.068416 3714493 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 14:16:35.073416 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0701 14:16:35.073509 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0701 14:16:35.102107 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 14:16:35.107455 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0701 14:16:35.120817 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0701 14:16:35.120886 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0701 14:16:35.128086 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0701 14:16:35.194687 3714493 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0701 14:16:35.194758 3714493 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0701 14:16:35.198166 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 14:16:35.215393 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0701 14:16:35.215506 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0701 14:16:35.223034 3714493 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 14:16:35.223117 3714493 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 14:16:35.259603 3714493 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0701 14:16:35.259676 3714493 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0701 14:16:35.275817 3714493 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0701 14:16:35.276165 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0701 14:16:35.326307 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0701 14:16:35.326379 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0701 14:16:35.369462 3714493 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0701 14:16:35.369541 3714493 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0701 14:16:35.388923 3714493 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0701 14:16:35.389004 3714493 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0701 14:16:35.405282 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0701 14:16:35.405355 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0701 14:16:35.422319 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0701 14:16:35.447016 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 14:16:35.493806 3714493 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0701 14:16:35.493891 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0701 14:16:35.500764 3714493 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0701 14:16:35.500828 3714493 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0701 14:16:35.527822 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0701 14:16:35.527898 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0701 14:16:35.531613 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0701 14:16:35.531676 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0701 14:16:35.585433 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0701 14:16:35.635054 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0701 14:16:35.635124 3714493 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0701 14:16:35.640424 3714493 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0701 14:16:35.640494 3714493 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0701 14:16:35.699546 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0701 14:16:35.699624 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0701 14:16:35.789406 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0701 14:16:35.789468 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0701 14:16:35.847372 3714493 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0701 14:16:35.847445 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0701 14:16:35.857297 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0701 14:16:35.857359 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0701 14:16:35.930577 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0701 14:16:35.930650 3714493 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0701 14:16:35.959377 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0701 14:16:35.959451 3714493 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0701 14:16:36.017769 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0701 14:16:36.074533 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0701 14:16:36.074602 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0701 14:16:36.090570 3714493 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0701 14:16:36.090677 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0701 14:16:36.193282 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0701 14:16:36.193353 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0701 14:16:36.216757 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0701 14:16:36.290921 3714493 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0701 14:16:36.291000 3714493 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0701 14:16:36.311402 3714493 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.933386242s)
	I0701 14:16:36.311516 3714493 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0701 14:16:36.311486 3714493 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.800650825s)
	I0701 14:16:36.312409 3714493 node_ready.go:35] waiting up to 6m0s for node "addons-929335" to be "Ready" ...
	I0701 14:16:36.394770 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0701 14:16:37.883303 3714493 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-929335" context rescaled to 1 replicas
	I0701 14:16:38.413681 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.507112665s)
	I0701 14:16:38.413784 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.468224774s)
	I0701 14:16:38.481246 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:38.762505 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.713308551s)
	I0701 14:16:39.225078 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.12288642s)
	I0701 14:16:39.233206 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.125664976s)
	I0701 14:16:40.070713 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.942553861s)
	I0701 14:16:40.070800 3714493 addons.go:475] Verifying addon ingress=true in "addons-929335"
	I0701 14:16:40.071041 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.872807842s)
	I0701 14:16:40.071283 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.648889468s)
	I0701 14:16:40.071486 3714493 addons.go:475] Verifying addon registry=true in "addons-929335"
	I0701 14:16:40.071351 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.624259143s)
	I0701 14:16:40.071656 3714493 addons.go:475] Verifying addon metrics-server=true in "addons-929335"
	I0701 14:16:40.071384 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.48587525s)
	I0701 14:16:40.073146 3714493 out.go:177] * Verifying ingress addon...
	I0701 14:16:40.073246 3714493 out.go:177] * Verifying registry addon...
	I0701 14:16:40.074988 3714493 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-929335 service yakd-dashboard -n yakd-dashboard
	
	I0701 14:16:40.075864 3714493 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0701 14:16:40.076826 3714493 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0701 14:16:40.114717 3714493 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0701 14:16:40.114819 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:40.119951 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.102065283s)
	W0701 14:16:40.120011 3714493 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0701 14:16:40.120038 3714493 retry.go:31] will retry after 205.150726ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0701 14:16:40.120111 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.903280624s)
	I0701 14:16:40.121342 3714493 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0701 14:16:40.121358 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:40.325909 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0701 14:16:40.580377 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.185514377s)
	I0701 14:16:40.580478 3714493 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-929335"
	I0701 14:16:40.583735 3714493 out.go:177] * Verifying csi-hostpath-driver addon...
	I0701 14:16:40.586713 3714493 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0701 14:16:40.594882 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:40.609140 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:40.613911 3714493 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0701 14:16:40.613980 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:40.816367 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:41.080335 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:41.082034 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:41.091835 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:41.565486 3714493 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0701 14:16:41.565570 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:41.590563 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:41.601562 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:41.611170 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:41.611920 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:41.805436 3714493 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0701 14:16:41.841584 3714493 addons.go:234] Setting addon gcp-auth=true in "addons-929335"
	I0701 14:16:41.841685 3714493 host.go:66] Checking if "addons-929335" exists ...
	I0701 14:16:41.842239 3714493 cli_runner.go:164] Run: docker container inspect addons-929335 --format={{.State.Status}}
	I0701 14:16:41.861355 3714493 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0701 14:16:41.861413 3714493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-929335
	I0701 14:16:41.880323 3714493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33900 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/addons-929335/id_rsa Username:docker}
	I0701 14:16:42.087796 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:42.090774 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:42.097128 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:42.581050 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:42.581421 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:42.591739 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:42.818588 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:43.084361 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:43.085481 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:43.097287 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:43.258329 3714493 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.932315114s)
	I0701 14:16:43.258502 3714493 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.39711314s)
	I0701 14:16:43.260477 3714493 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0701 14:16:43.262235 3714493 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0701 14:16:43.263843 3714493 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0701 14:16:43.263898 3714493 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0701 14:16:43.289146 3714493 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0701 14:16:43.289215 3714493 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0701 14:16:43.308260 3714493 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0701 14:16:43.308331 3714493 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0701 14:16:43.332622 3714493 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0701 14:16:43.581225 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:43.583473 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:43.591619 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:44.112734 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:44.113967 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:44.137665 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:44.234862 3714493 addons.go:475] Verifying addon gcp-auth=true in "addons-929335"
	I0701 14:16:44.236796 3714493 out.go:177] * Verifying gcp-auth addon...
	I0701 14:16:44.239981 3714493 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0701 14:16:44.252868 3714493 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0701 14:16:44.252943 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:44.581786 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:44.583186 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:44.591571 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:44.744040 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:45.091760 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:45.092280 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:45.096570 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:45.244845 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:45.316121 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:45.580977 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:45.583462 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:45.592691 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:45.743455 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:46.081399 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:46.081769 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:46.097919 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:46.244357 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:46.581409 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:46.582440 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:46.591477 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:46.743506 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:47.082516 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:47.083590 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:47.091334 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:47.244489 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:47.579819 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:47.581891 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:47.590923 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:47.743710 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:47.816128 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:48.081822 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:48.082306 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:48.092660 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:48.244637 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:48.580657 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:48.581493 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:48.591772 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:48.744400 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:49.079804 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:49.081694 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:49.092515 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:49.243623 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:49.580958 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:49.581329 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:49.591195 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:49.744011 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:49.816434 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:50.082335 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:50.083139 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:50.091619 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:50.243526 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:50.581336 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:50.582087 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:50.592499 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:50.744123 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:51.082085 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:51.082565 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:51.091484 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:51.244335 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:51.580931 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:51.582249 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:51.591555 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:51.743315 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:52.080722 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:52.081149 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:52.091088 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:52.244361 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:52.316143 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:52.580180 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:52.580590 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:52.591134 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:52.743929 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:53.081743 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:53.082502 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:53.092003 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:53.244465 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:53.581198 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:53.581366 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:53.591498 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:53.743784 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:54.080401 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:54.080788 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:54.090966 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:54.244344 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:54.316346 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:54.579922 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:54.580626 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:54.591119 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:54.744014 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:55.080124 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:55.082492 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:55.092937 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:55.244606 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:55.581524 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:55.581905 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:55.591526 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:55.743944 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:56.080659 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:56.081736 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:56.091158 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:56.244323 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:56.580509 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:56.580824 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:56.591465 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:56.743982 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:56.816430 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:57.080545 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:57.080997 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:57.091857 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:57.243711 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:57.580859 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:57.581093 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:57.590824 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:57.744460 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:58.080569 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:58.082411 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:58.091207 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:58.245509 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:58.580574 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:58.581893 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:58.591628 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:58.744093 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:58.816533 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:16:59.080818 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:59.081423 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:59.091411 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:59.243970 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:16:59.581367 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:16:59.582123 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:16:59.591614 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:16:59.743472 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:00.105087 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:00.105319 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:00.114476 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:00.244951 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:00.581487 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:00.582108 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:00.591241 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:00.744250 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:01.081456 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:01.082717 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:01.090945 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:01.243772 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:01.315836 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:17:01.580993 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:01.582205 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:01.591016 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:01.744216 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:02.082352 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:02.083194 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:02.091988 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:02.245410 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:02.580393 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:02.582327 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:02.591133 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:02.744415 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:03.081584 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:03.083205 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:03.091034 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:03.244147 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:03.316176 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:17:03.581041 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:03.581899 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:03.591547 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:03.744129 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:04.086859 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:04.089289 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:04.093086 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:04.243977 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:04.581989 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:04.582827 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:04.591736 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:04.744259 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:05.081576 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:05.082352 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:05.094714 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:05.244684 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:05.317062 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:17:05.581387 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:05.582634 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:05.592027 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:05.743753 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:06.081280 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:06.082063 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:06.091294 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:06.244522 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:06.580817 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:06.581584 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:06.591150 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:06.745955 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:07.080430 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:07.082352 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:07.091494 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:07.243868 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:07.579829 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:07.581908 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:07.591673 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:07.743653 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:07.815919 3714493 node_ready.go:53] node "addons-929335" has status "Ready":"False"
	I0701 14:17:08.121096 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:08.121891 3714493 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0701 14:17:08.121908 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:08.132177 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:08.249372 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:08.342301 3714493 node_ready.go:49] node "addons-929335" has status "Ready":"True"
	I0701 14:17:08.342330 3714493 node_ready.go:38] duration metric: took 32.029893573s for node "addons-929335" to be "Ready" ...
	I0701 14:17:08.342341 3714493 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 14:17:08.407878 3714493 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-s8jw9" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:08.584824 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:08.585208 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:08.591823 3714493 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0701 14:17:08.591892 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:08.747802 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:09.096665 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:09.097434 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:09.104098 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:09.243844 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:09.582201 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:09.586042 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:09.593329 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:09.743661 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:10.099148 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:10.111876 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:10.113382 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:10.244252 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:10.414173 3714493 pod_ready.go:102] pod "coredns-7db6d8ff4d-s8jw9" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:10.588255 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:10.589913 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:10.594007 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:10.745176 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:11.084905 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:11.089257 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:11.103084 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:11.244429 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:11.414405 3714493 pod_ready.go:92] pod "coredns-7db6d8ff4d-s8jw9" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.414474 3714493 pod_ready.go:81] duration metric: took 3.006514273s for pod "coredns-7db6d8ff4d-s8jw9" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.414514 3714493 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.428148 3714493 pod_ready.go:92] pod "etcd-addons-929335" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.428217 3714493 pod_ready.go:81] duration metric: took 13.671197ms for pod "etcd-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.428246 3714493 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.433167 3714493 pod_ready.go:92] pod "kube-apiserver-addons-929335" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.433237 3714493 pod_ready.go:81] duration metric: took 4.969126ms for pod "kube-apiserver-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.433263 3714493 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.446230 3714493 pod_ready.go:92] pod "kube-controller-manager-addons-929335" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.446302 3714493 pod_ready.go:81] duration metric: took 13.017643ms for pod "kube-controller-manager-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.446333 3714493 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b7sh5" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.459641 3714493 pod_ready.go:92] pod "kube-proxy-b7sh5" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.459722 3714493 pod_ready.go:81] duration metric: took 13.367964ms for pod "kube-proxy-b7sh5" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.459749 3714493 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.585639 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:11.591679 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:11.616799 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:11.747435 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:11.812343 3714493 pod_ready.go:92] pod "kube-scheduler-addons-929335" in "kube-system" namespace has status "Ready":"True"
	I0701 14:17:11.812369 3714493 pod_ready.go:81] duration metric: took 352.598979ms for pod "kube-scheduler-addons-929335" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:11.812389 3714493 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace to be "Ready" ...
	I0701 14:17:12.086056 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:12.087355 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:12.094836 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:12.243945 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:12.583836 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:12.594583 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:12.603588 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:12.743704 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:13.084219 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:13.098737 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:13.100774 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:13.244223 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:13.583124 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:13.583751 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:13.592603 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:13.744082 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:13.824263 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:14.084580 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:14.086741 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:14.093056 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:14.244983 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:14.582155 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:14.582696 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:14.592536 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:14.743665 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:15.081735 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:15.082950 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:15.094604 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:15.245086 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:15.584316 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:15.586002 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:15.592428 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:15.747106 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:16.084605 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:16.085955 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:16.093500 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:16.245783 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:16.320486 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:16.595919 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:16.596344 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:16.605544 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:16.744506 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:17.082304 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:17.082744 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:17.093918 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:17.243917 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:17.581963 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:17.582581 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:17.592845 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:17.743059 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:18.082351 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:18.083249 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:18.093350 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:18.244265 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:18.581308 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:18.586706 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:18.593641 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:18.744406 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:18.818252 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:19.084509 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:19.086426 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:19.093603 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:19.247175 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:19.579923 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:19.582723 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:19.592470 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:19.752591 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:20.082363 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:20.083528 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:20.092461 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:20.243746 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:20.580418 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:20.582399 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:20.591564 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:20.744811 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:20.818777 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:21.089122 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:21.092473 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:21.104228 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:21.250201 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:21.585518 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:21.589157 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:21.595584 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:21.744853 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:22.081055 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:22.085304 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:22.097664 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:22.246094 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:22.582647 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:22.590030 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:22.596549 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:22.744462 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:22.819648 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:23.084611 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:23.086186 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:23.092796 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:23.244775 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:23.584771 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:23.586538 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:23.593752 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:23.744345 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:24.083918 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:24.086065 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:24.100601 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:24.244313 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:24.585008 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:24.588076 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:24.603405 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:24.743971 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:25.084738 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:25.089390 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:25.108787 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:25.249773 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:25.320530 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:25.582275 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:25.582948 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:25.593324 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:25.743834 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:26.081822 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:26.083483 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:26.091958 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:26.246513 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:26.581070 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:26.584358 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:26.592073 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:26.744060 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:27.083933 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:27.087574 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:27.098038 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:27.244736 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:27.326496 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:27.584147 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:27.585761 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:27.595737 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:27.745504 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:28.080450 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:28.083180 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:28.093785 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:28.252622 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:28.581367 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:28.584959 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:28.592453 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:28.743215 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:29.083390 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:29.084203 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:29.093746 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:29.243930 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:29.582119 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:29.583080 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:29.592096 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:29.745654 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:29.818848 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:30.080917 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:30.083572 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:30.092522 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:30.268887 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:30.585177 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:30.586384 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:30.594151 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:30.744094 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:31.082249 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:31.083768 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:31.094398 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:31.247133 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:31.580461 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:31.581652 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:31.593768 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:31.744234 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:31.820932 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:32.084070 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:32.085545 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:32.101092 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:32.254086 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:32.582906 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:32.592488 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:32.600965 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:32.744548 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:33.081519 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:33.086743 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:33.115393 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:33.274825 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:33.609454 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:33.622999 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:33.631870 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:33.744724 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:33.823173 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:34.088408 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:34.095624 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:34.100334 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:34.244279 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:34.581965 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:34.583004 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:34.592727 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:34.744535 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:35.083196 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:35.084170 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:35.094791 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:35.245967 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:35.581436 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:35.581628 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:35.592871 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:35.743886 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:36.083365 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:36.084925 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:36.093416 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:36.244676 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:36.320502 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:36.583626 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:36.584210 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:36.592820 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:36.747418 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:37.080565 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:37.081780 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:37.092547 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:37.244560 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:37.582541 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:37.583847 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:37.592458 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:37.743742 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:38.081799 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:38.083404 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:38.093611 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:38.243966 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:38.582447 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:38.583327 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:38.592080 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:38.744494 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:38.819616 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:39.083755 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:39.085354 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:39.092742 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:39.258682 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:39.588577 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:39.590657 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:39.599236 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:39.744540 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:40.082499 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:40.085942 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:40.092602 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:40.245146 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:40.586288 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:40.588438 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:40.593104 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:40.743713 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:40.819721 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:41.086088 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:41.090200 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:41.100932 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:41.244962 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:41.586086 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:41.587090 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:41.598139 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:41.744196 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:42.095449 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:42.096226 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:42.116461 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:42.244514 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:42.583737 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:42.585299 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:42.600036 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:42.744055 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:42.826791 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:43.083077 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:43.084537 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:43.092305 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:43.244242 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:43.580998 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:43.585488 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:43.592383 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:43.743836 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:44.082834 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:44.085650 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:44.096622 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:44.250671 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:44.584404 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:44.584838 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:44.593505 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:44.757588 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:44.833972 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:45.081380 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:45.083888 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:45.096816 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:45.248338 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:45.581977 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:45.582421 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:45.593981 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:45.743497 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:46.082952 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:46.084384 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:46.093943 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:46.244309 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:46.594520 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:46.596581 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:46.606991 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:46.744674 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:47.089882 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:47.091515 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:47.104966 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:47.246405 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:47.320564 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:47.586580 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:47.587992 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:47.604259 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:47.743954 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:48.084946 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:48.086374 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:48.097192 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:48.245240 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:48.584910 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:48.591451 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:48.596175 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:48.750309 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:49.083471 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:49.091054 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:49.097179 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:49.245629 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:49.321272 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:49.583149 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:49.586437 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:49.592746 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:49.744535 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:50.083448 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0701 14:17:50.085346 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:50.093173 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:50.244371 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:50.581890 3714493 kapi.go:107] duration metric: took 1m10.505061287s to wait for kubernetes.io/minikube-addons=registry ...
	I0701 14:17:50.582832 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:50.593459 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:50.744140 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:51.081107 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:51.093390 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:51.243966 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:51.582414 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:51.596552 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:51.746017 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:51.820171 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:52.081575 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:52.093544 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:52.249040 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:52.582441 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:52.594112 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:52.744184 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:53.082226 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:53.093338 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:53.245050 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:53.580913 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:53.592755 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:53.746826 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:53.820702 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:54.082705 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:54.093276 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:54.244633 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:54.583747 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:54.595896 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:54.743583 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:55.080939 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:55.097199 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:55.243998 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:55.581310 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:55.593573 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:55.744617 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:56.081311 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:56.094378 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:56.243865 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:56.324735 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:56.592198 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:56.601905 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:56.743567 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:57.080762 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:57.092818 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:57.248322 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:57.581259 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:57.608831 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:57.743883 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:58.081617 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:58.094270 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:58.244507 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:58.581300 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:58.595031 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:58.743694 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:58.819122 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:17:59.081311 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:59.134720 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:59.258056 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:17:59.580976 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:17:59.600737 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:17:59.744966 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:00.111941 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:00.115667 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:00.255042 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:00.580944 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:00.593422 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:00.744199 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:00.819705 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:01.081335 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:01.094196 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:01.244659 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:01.582206 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:01.608169 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:01.744741 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:02.080555 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:02.093951 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:02.243992 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:02.582627 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:02.596738 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:02.745300 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:03.082919 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:03.095360 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:03.244087 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:03.322640 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:03.580032 3714493 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0701 14:18:03.592316 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:03.743526 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:04.081594 3714493 kapi.go:107] duration metric: took 1m24.005729921s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0701 14:18:04.092100 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:04.245297 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:04.593062 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:04.743777 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:05.102037 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:05.244310 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:05.324952 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:05.593130 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:05.755127 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:06.093794 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:06.244871 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:06.593080 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:06.744220 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:07.092525 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:07.244322 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:07.592808 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:07.743942 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:07.818673 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:08.093951 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:08.243751 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:08.592755 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:08.744811 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:09.093112 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:09.245337 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:09.597137 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:09.744422 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:09.819101 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:10.093387 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:10.244745 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:10.592596 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:10.746558 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:11.092749 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:11.251166 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:11.594323 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:11.746441 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:11.824825 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:12.096102 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:12.247897 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:12.592138 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0701 14:18:12.744277 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:13.092556 3714493 kapi.go:107] duration metric: took 1m32.505840958s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0701 14:18:13.244077 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:13.745512 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:13.842973 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:14.244527 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:14.744960 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:15.244502 3714493 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0701 14:18:15.764548 3714493 kapi.go:107] duration metric: took 1m31.524566461s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0701 14:18:15.766936 3714493 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-929335 cluster.
	I0701 14:18:15.768964 3714493 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0701 14:18:15.770854 3714493 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0701 14:18:15.772686 3714493 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, inspektor-gadget, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0701 14:18:15.774607 3714493 addons.go:510] duration metric: took 1m41.755576625s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner storage-provisioner-rancher metrics-server yakd inspektor-gadget default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0701 14:18:16.318256 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:18.318365 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:20.818435 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:23.319002 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:25.819058 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:28.319549 3714493 pod_ready.go:102] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"False"
	I0701 14:18:28.818929 3714493 pod_ready.go:92] pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace has status "Ready":"True"
	I0701 14:18:28.818960 3714493 pod_ready.go:81] duration metric: took 1m17.006561887s for pod "metrics-server-c59844bb4-7ddxq" in "kube-system" namespace to be "Ready" ...
	I0701 14:18:28.818972 3714493 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ssxlb" in "kube-system" namespace to be "Ready" ...
	I0701 14:18:28.824206 3714493 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ssxlb" in "kube-system" namespace has status "Ready":"True"
	I0701 14:18:28.824276 3714493 pod_ready.go:81] duration metric: took 5.29493ms for pod "nvidia-device-plugin-daemonset-ssxlb" in "kube-system" namespace to be "Ready" ...
	I0701 14:18:28.824322 3714493 pod_ready.go:38] duration metric: took 1m20.48196799s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 14:18:28.824355 3714493 api_server.go:52] waiting for apiserver process to appear ...
	I0701 14:18:28.824385 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 14:18:28.824460 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 14:18:28.878403 3714493 cri.go:89] found id: "a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:28.878422 3714493 cri.go:89] found id: ""
	I0701 14:18:28.878430 3714493 logs.go:276] 1 containers: [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657]
	I0701 14:18:28.878485 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:28.882652 3714493 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 14:18:28.882726 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 14:18:28.921166 3714493 cri.go:89] found id: "a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:28.921189 3714493 cri.go:89] found id: ""
	I0701 14:18:28.921197 3714493 logs.go:276] 1 containers: [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025]
	I0701 14:18:28.921269 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:28.924643 3714493 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 14:18:28.924731 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 14:18:28.963683 3714493 cri.go:89] found id: "c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:28.963711 3714493 cri.go:89] found id: ""
	I0701 14:18:28.963720 3714493 logs.go:276] 1 containers: [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a]
	I0701 14:18:28.963775 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:28.967164 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 14:18:28.967252 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 14:18:29.007531 3714493 cri.go:89] found id: "f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:29.007556 3714493 cri.go:89] found id: ""
	I0701 14:18:29.007564 3714493 logs.go:276] 1 containers: [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8]
	I0701 14:18:29.007629 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:29.011292 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 14:18:29.011365 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 14:18:29.052545 3714493 cri.go:89] found id: "dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:29.052569 3714493 cri.go:89] found id: ""
	I0701 14:18:29.052577 3714493 logs.go:276] 1 containers: [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba]
	I0701 14:18:29.052635 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:29.056445 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 14:18:29.056519 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 14:18:29.095182 3714493 cri.go:89] found id: "646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:29.095204 3714493 cri.go:89] found id: ""
	I0701 14:18:29.095212 3714493 logs.go:276] 1 containers: [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393]
	I0701 14:18:29.095270 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:29.098928 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 14:18:29.099008 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 14:18:29.137707 3714493 cri.go:89] found id: "db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:29.137731 3714493 cri.go:89] found id: ""
	I0701 14:18:29.137739 3714493 logs.go:276] 1 containers: [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3]
	I0701 14:18:29.137794 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:29.141319 3714493 logs.go:123] Gathering logs for kube-controller-manager [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393] ...
	I0701 14:18:29.141345 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:29.223402 3714493 logs.go:123] Gathering logs for CRI-O ...
	I0701 14:18:29.223439 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 14:18:29.318656 3714493 logs.go:123] Gathering logs for container status ...
	I0701 14:18:29.318689 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 14:18:29.380137 3714493 logs.go:123] Gathering logs for kubelet ...
	I0701 14:18:29.380172 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 14:18:29.432200 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.791186    1552 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.432422 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.791243    1552 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.432955 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.815611    1552 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.433163 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.815653    1552 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.444763 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.068671    1552 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.444988 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.445463 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.445656 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.445821 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:29.446005 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:29.490072 3714493 logs.go:123] Gathering logs for dmesg ...
	I0701 14:18:29.490111 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 14:18:29.509840 3714493 logs.go:123] Gathering logs for kube-apiserver [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657] ...
	I0701 14:18:29.509870 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:29.573738 3714493 logs.go:123] Gathering logs for kube-scheduler [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8] ...
	I0701 14:18:29.573775 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:29.620326 3714493 logs.go:123] Gathering logs for kube-proxy [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba] ...
	I0701 14:18:29.620359 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:29.663688 3714493 logs.go:123] Gathering logs for describe nodes ...
	I0701 14:18:29.663718 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 14:18:29.845662 3714493 logs.go:123] Gathering logs for etcd [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025] ...
	I0701 14:18:29.845698 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:29.899201 3714493 logs.go:123] Gathering logs for coredns [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a] ...
	I0701 14:18:29.899387 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:29.946145 3714493 logs.go:123] Gathering logs for kindnet [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3] ...
	I0701 14:18:29.946174 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:29.999964 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:29.999988 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0701 14:18:30.000037 3714493 out.go:239] X Problems detected in kubelet:
	W0701 14:18:30.000046 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:30.000053 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:30.000059 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:30.000067 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:30.000073 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:30.000086 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:30.000091 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:18:40.003404 3714493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 14:18:40.023370 3714493 api_server.go:72] duration metric: took 2m6.004737105s to wait for apiserver process to appear ...
	I0701 14:18:40.023399 3714493 api_server.go:88] waiting for apiserver healthz status ...
	I0701 14:18:40.023437 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 14:18:40.023501 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 14:18:40.071114 3714493 cri.go:89] found id: "a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:40.071141 3714493 cri.go:89] found id: ""
	I0701 14:18:40.071149 3714493 logs.go:276] 1 containers: [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657]
	I0701 14:18:40.071207 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.074805 3714493 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 14:18:40.074884 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 14:18:40.118691 3714493 cri.go:89] found id: "a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:40.118714 3714493 cri.go:89] found id: ""
	I0701 14:18:40.118722 3714493 logs.go:276] 1 containers: [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025]
	I0701 14:18:40.118778 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.123744 3714493 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 14:18:40.123820 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 14:18:40.166954 3714493 cri.go:89] found id: "c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:40.166979 3714493 cri.go:89] found id: ""
	I0701 14:18:40.166987 3714493 logs.go:276] 1 containers: [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a]
	I0701 14:18:40.167047 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.171043 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 14:18:40.171114 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 14:18:40.215720 3714493 cri.go:89] found id: "f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:40.215744 3714493 cri.go:89] found id: ""
	I0701 14:18:40.215752 3714493 logs.go:276] 1 containers: [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8]
	I0701 14:18:40.215812 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.219836 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 14:18:40.219910 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 14:18:40.261304 3714493 cri.go:89] found id: "dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:40.261327 3714493 cri.go:89] found id: ""
	I0701 14:18:40.261335 3714493 logs.go:276] 1 containers: [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba]
	I0701 14:18:40.261392 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.265186 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 14:18:40.265259 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 14:18:40.307433 3714493 cri.go:89] found id: "646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:40.307457 3714493 cri.go:89] found id: ""
	I0701 14:18:40.307479 3714493 logs.go:276] 1 containers: [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393]
	I0701 14:18:40.307536 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.310987 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 14:18:40.311060 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 14:18:40.349166 3714493 cri.go:89] found id: "db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:40.349191 3714493 cri.go:89] found id: ""
	I0701 14:18:40.349198 3714493 logs.go:276] 1 containers: [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3]
	I0701 14:18:40.349255 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:40.352947 3714493 logs.go:123] Gathering logs for kubelet ...
	I0701 14:18:40.352972 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 14:18:40.397772 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.791186    1552 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.398011 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.791243    1552 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.398700 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.815611    1552 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.398925 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.815653    1552 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.410767 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.068671    1552 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.411019 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.411501 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.411692 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.411858 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:40.412052 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:40.460267 3714493 logs.go:123] Gathering logs for describe nodes ...
	I0701 14:18:40.460318 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 14:18:40.627707 3714493 logs.go:123] Gathering logs for kube-scheduler [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8] ...
	I0701 14:18:40.627740 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:40.682164 3714493 logs.go:123] Gathering logs for kube-controller-manager [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393] ...
	I0701 14:18:40.682195 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:40.772273 3714493 logs.go:123] Gathering logs for container status ...
	I0701 14:18:40.772304 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 14:18:40.842797 3714493 logs.go:123] Gathering logs for kindnet [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3] ...
	I0701 14:18:40.842828 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:40.886545 3714493 logs.go:123] Gathering logs for CRI-O ...
	I0701 14:18:40.886582 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 14:18:40.995636 3714493 logs.go:123] Gathering logs for dmesg ...
	I0701 14:18:40.995681 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 14:18:41.016425 3714493 logs.go:123] Gathering logs for kube-apiserver [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657] ...
	I0701 14:18:41.016462 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:41.071208 3714493 logs.go:123] Gathering logs for etcd [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025] ...
	I0701 14:18:41.071238 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:41.122978 3714493 logs.go:123] Gathering logs for coredns [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a] ...
	I0701 14:18:41.123008 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:41.164189 3714493 logs.go:123] Gathering logs for kube-proxy [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba] ...
	I0701 14:18:41.164223 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:41.207147 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:41.207170 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0701 14:18:41.207218 3714493 out.go:239] X Problems detected in kubelet:
	W0701 14:18:41.207233 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:41.207240 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:41.207256 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:41.207264 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:41.207277 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:41.207283 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:41.207289 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:18:51.208757 3714493 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:18:51.216274 3714493 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0701 14:18:51.217497 3714493 api_server.go:141] control plane version: v1.30.2
	I0701 14:18:51.217526 3714493 api_server.go:131] duration metric: took 11.194120422s to wait for apiserver health ...
	I0701 14:18:51.217535 3714493 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 14:18:51.217558 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 14:18:51.217627 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 14:18:51.262838 3714493 cri.go:89] found id: "a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:51.262868 3714493 cri.go:89] found id: ""
	I0701 14:18:51.262876 3714493 logs.go:276] 1 containers: [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657]
	I0701 14:18:51.262934 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.266571 3714493 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 14:18:51.266649 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 14:18:51.313334 3714493 cri.go:89] found id: "a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:51.313358 3714493 cri.go:89] found id: ""
	I0701 14:18:51.313366 3714493 logs.go:276] 1 containers: [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025]
	I0701 14:18:51.313421 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.317854 3714493 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 14:18:51.317927 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 14:18:51.359342 3714493 cri.go:89] found id: "c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:51.359362 3714493 cri.go:89] found id: ""
	I0701 14:18:51.359370 3714493 logs.go:276] 1 containers: [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a]
	I0701 14:18:51.359425 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.363207 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 14:18:51.363282 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 14:18:51.407610 3714493 cri.go:89] found id: "f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:51.407633 3714493 cri.go:89] found id: ""
	I0701 14:18:51.407640 3714493 logs.go:276] 1 containers: [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8]
	I0701 14:18:51.407722 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.411297 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 14:18:51.411396 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 14:18:51.453297 3714493 cri.go:89] found id: "dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:51.453364 3714493 cri.go:89] found id: ""
	I0701 14:18:51.453386 3714493 logs.go:276] 1 containers: [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba]
	I0701 14:18:51.453476 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.457279 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 14:18:51.457393 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 14:18:51.505841 3714493 cri.go:89] found id: "646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:51.505864 3714493 cri.go:89] found id: ""
	I0701 14:18:51.505872 3714493 logs.go:276] 1 containers: [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393]
	I0701 14:18:51.505944 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.509555 3714493 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 14:18:51.509641 3714493 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 14:18:51.553689 3714493 cri.go:89] found id: "db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:51.553753 3714493 cri.go:89] found id: ""
	I0701 14:18:51.553775 3714493 logs.go:276] 1 containers: [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3]
	I0701 14:18:51.553861 3714493 ssh_runner.go:195] Run: which crictl
	I0701 14:18:51.557461 3714493 logs.go:123] Gathering logs for kubelet ...
	I0701 14:18:51.557545 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 14:18:51.598201 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.791186    1552 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.598453 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.791243    1552 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.599003 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: W0701 14:16:33.815611    1552 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.599206 3714493 logs.go:138] Found kubelet problem: Jul 01 14:16:33 addons-929335 kubelet[1552]: E0701 14:16:33.815653    1552 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.610045 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.068671    1552 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.610253 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.610715 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.610908 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.611074 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:51.611264 3714493 logs.go:138] Found kubelet problem: Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:51.656841 3714493 logs.go:123] Gathering logs for dmesg ...
	I0701 14:18:51.656865 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 14:18:51.676175 3714493 logs.go:123] Gathering logs for etcd [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025] ...
	I0701 14:18:51.676204 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025"
	I0701 14:18:51.733448 3714493 logs.go:123] Gathering logs for kube-scheduler [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8] ...
	I0701 14:18:51.733480 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8"
	I0701 14:18:51.780486 3714493 logs.go:123] Gathering logs for kube-controller-manager [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393] ...
	I0701 14:18:51.780516 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393"
	I0701 14:18:51.852657 3714493 logs.go:123] Gathering logs for CRI-O ...
	I0701 14:18:51.852773 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 14:18:51.943875 3714493 logs.go:123] Gathering logs for describe nodes ...
	I0701 14:18:51.943911 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 14:18:52.078224 3714493 logs.go:123] Gathering logs for kube-apiserver [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657] ...
	I0701 14:18:52.078254 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657"
	I0701 14:18:52.141339 3714493 logs.go:123] Gathering logs for coredns [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a] ...
	I0701 14:18:52.141370 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a"
	I0701 14:18:52.180565 3714493 logs.go:123] Gathering logs for kube-proxy [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba] ...
	I0701 14:18:52.180595 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba"
	I0701 14:18:52.242104 3714493 logs.go:123] Gathering logs for kindnet [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3] ...
	I0701 14:18:52.242136 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3"
	I0701 14:18:52.280088 3714493 logs.go:123] Gathering logs for container status ...
	I0701 14:18:52.280116 3714493 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 14:18:52.341462 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:52.341490 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0701 14:18:52.341544 3714493 out.go:239] X Problems detected in kubelet:
	W0701 14:18:52.341553 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.068720    1552 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:52.341562 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.071459    1552 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:52.341575 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.071498    1552 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-929335" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-929335' and this object
	W0701 14:18:52.341583 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: W0701 14:17:08.077204    1552 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	W0701 14:18:52.341591 3714493 out.go:239]   Jul 01 14:17:08 addons-929335 kubelet[1552]: E0701 14:17:08.077471    1552 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-929335" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-929335' and this object
	I0701 14:18:52.341597 3714493 out.go:304] Setting ErrFile to fd 2...
	I0701 14:18:52.341608 3714493 out.go:338] TERM=,COLORTERM=, which probably does not support color
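	
	The wait loop above polls the apiserver healthz endpoint roughly every 10 seconds (the api_server.go lines at 14:18:41, 14:18:51, 14:19:02). A minimal manual equivalent, reusing the endpoint shown in the log (-k because the apiserver certificate is not trusted from the node's shell):
	
	# Probe the same endpoint the harness polls; a healthy apiserver returns the body "ok"
	out/minikube-linux-arm64 -p addons-929335 ssh "curl -sk https://192.168.49.2:8443/healthz"
	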
	I0701 14:19:02.361276 3714493 system_pods.go:59] 18 kube-system pods found
	I0701 14:19:02.361314 3714493 system_pods.go:61] "coredns-7db6d8ff4d-s8jw9" [7ec40280-c5a3-4403-8f98-39eaa3f29e2c] Running
	I0701 14:19:02.361327 3714493 system_pods.go:61] "csi-hostpath-attacher-0" [2c927ce0-3ecd-4174-94fe-3e73008a24eb] Running
	I0701 14:19:02.361333 3714493 system_pods.go:61] "csi-hostpath-resizer-0" [24cb89c7-79cc-4c66-8046-84cf4c819fd4] Running
	I0701 14:19:02.361338 3714493 system_pods.go:61] "csi-hostpathplugin-mcv65" [4ad794ec-8d44-48b4-94fd-ab0605d8f2b1] Running
	I0701 14:19:02.361342 3714493 system_pods.go:61] "etcd-addons-929335" [0664c0af-c270-4fcb-8bb4-cc76248cf3ea] Running
	I0701 14:19:02.361346 3714493 system_pods.go:61] "kindnet-nzscv" [9aec9a7c-149e-4bec-b5c3-0524417a5272] Running
	I0701 14:19:02.361351 3714493 system_pods.go:61] "kube-apiserver-addons-929335" [af03cc27-972b-4106-b44b-de7d69eab5a6] Running
	I0701 14:19:02.361359 3714493 system_pods.go:61] "kube-controller-manager-addons-929335" [efe0db8d-b63f-4b20-b0b5-6e1036c91627] Running
	I0701 14:19:02.361381 3714493 system_pods.go:61] "kube-ingress-dns-minikube" [25af24ab-7674-4c32-b452-00053e068d4c] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0701 14:19:02.361399 3714493 system_pods.go:61] "kube-proxy-b7sh5" [0ae5c8da-8e2e-4513-9c4a-058705a64586] Running
	I0701 14:19:02.361405 3714493 system_pods.go:61] "kube-scheduler-addons-929335" [12031591-d005-45c2-8097-53a52d94b85b] Running
	I0701 14:19:02.361409 3714493 system_pods.go:61] "metrics-server-c59844bb4-7ddxq" [d044ed9e-3f07-4293-b20a-7710385bba17] Running
	I0701 14:19:02.361417 3714493 system_pods.go:61] "nvidia-device-plugin-daemonset-ssxlb" [07a73834-f2a1-49e5-ae9a-e15bee08c8ab] Running
	I0701 14:19:02.361425 3714493 system_pods.go:61] "registry-bnzqk" [710fb3bb-d2cb-4fb1-a706-25569704842a] Running
	I0701 14:19:02.361428 3714493 system_pods.go:61] "registry-proxy-cwtgh" [d522d504-68de-46ed-a686-4cb3f3054752] Running
	I0701 14:19:02.361432 3714493 system_pods.go:61] "snapshot-controller-745499f584-44clr" [46fcf348-5b93-443d-ad6c-9460a5abac66] Running
	I0701 14:19:02.361436 3714493 system_pods.go:61] "snapshot-controller-745499f584-f9c4l" [5d61a185-32fe-4b26-adf3-25413d4c354d] Running
	I0701 14:19:02.361444 3714493 system_pods.go:61] "storage-provisioner" [336d511c-48f8-41ab-9e80-73414eb12f55] Running
	I0701 14:19:02.361450 3714493 system_pods.go:74] duration metric: took 11.143908631s to wait for pod list to return data ...
	I0701 14:19:02.361465 3714493 default_sa.go:34] waiting for default service account to be created ...
	I0701 14:19:02.372349 3714493 default_sa.go:45] found service account: "default"
	I0701 14:19:02.372379 3714493 default_sa.go:55] duration metric: took 10.901289ms for default service account to be created ...
	I0701 14:19:02.372390 3714493 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 14:19:02.382702 3714493 system_pods.go:86] 18 kube-system pods found
	I0701 14:19:02.382740 3714493 system_pods.go:89] "coredns-7db6d8ff4d-s8jw9" [7ec40280-c5a3-4403-8f98-39eaa3f29e2c] Running
	I0701 14:19:02.382748 3714493 system_pods.go:89] "csi-hostpath-attacher-0" [2c927ce0-3ecd-4174-94fe-3e73008a24eb] Running
	I0701 14:19:02.382753 3714493 system_pods.go:89] "csi-hostpath-resizer-0" [24cb89c7-79cc-4c66-8046-84cf4c819fd4] Running
	I0701 14:19:02.382758 3714493 system_pods.go:89] "csi-hostpathplugin-mcv65" [4ad794ec-8d44-48b4-94fd-ab0605d8f2b1] Running
	I0701 14:19:02.382762 3714493 system_pods.go:89] "etcd-addons-929335" [0664c0af-c270-4fcb-8bb4-cc76248cf3ea] Running
	I0701 14:19:02.382767 3714493 system_pods.go:89] "kindnet-nzscv" [9aec9a7c-149e-4bec-b5c3-0524417a5272] Running
	I0701 14:19:02.382771 3714493 system_pods.go:89] "kube-apiserver-addons-929335" [af03cc27-972b-4106-b44b-de7d69eab5a6] Running
	I0701 14:19:02.382775 3714493 system_pods.go:89] "kube-controller-manager-addons-929335" [efe0db8d-b63f-4b20-b0b5-6e1036c91627] Running
	I0701 14:19:02.382785 3714493 system_pods.go:89] "kube-ingress-dns-minikube" [25af24ab-7674-4c32-b452-00053e068d4c] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0701 14:19:02.382791 3714493 system_pods.go:89] "kube-proxy-b7sh5" [0ae5c8da-8e2e-4513-9c4a-058705a64586] Running
	I0701 14:19:02.382799 3714493 system_pods.go:89] "kube-scheduler-addons-929335" [12031591-d005-45c2-8097-53a52d94b85b] Running
	I0701 14:19:02.382803 3714493 system_pods.go:89] "metrics-server-c59844bb4-7ddxq" [d044ed9e-3f07-4293-b20a-7710385bba17] Running
	I0701 14:19:02.382807 3714493 system_pods.go:89] "nvidia-device-plugin-daemonset-ssxlb" [07a73834-f2a1-49e5-ae9a-e15bee08c8ab] Running
	I0701 14:19:02.382811 3714493 system_pods.go:89] "registry-bnzqk" [710fb3bb-d2cb-4fb1-a706-25569704842a] Running
	I0701 14:19:02.382817 3714493 system_pods.go:89] "registry-proxy-cwtgh" [d522d504-68de-46ed-a686-4cb3f3054752] Running
	I0701 14:19:02.382822 3714493 system_pods.go:89] "snapshot-controller-745499f584-44clr" [46fcf348-5b93-443d-ad6c-9460a5abac66] Running
	I0701 14:19:02.382829 3714493 system_pods.go:89] "snapshot-controller-745499f584-f9c4l" [5d61a185-32fe-4b26-adf3-25413d4c354d] Running
	I0701 14:19:02.382833 3714493 system_pods.go:89] "storage-provisioner" [336d511c-48f8-41ab-9e80-73414eb12f55] Running
	I0701 14:19:02.382840 3714493 system_pods.go:126] duration metric: took 10.445374ms to wait for k8s-apps to be running ...
	I0701 14:19:02.382853 3714493 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 14:19:02.382917 3714493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 14:19:02.395275 3714493 system_svc.go:56] duration metric: took 12.412197ms WaitForService to wait for kubelet
	I0701 14:19:02.395315 3714493 kubeadm.go:576] duration metric: took 2m28.376687186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 14:19:02.395336 3714493 node_conditions.go:102] verifying NodePressure condition ...
	I0701 14:19:02.399242 3714493 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0701 14:19:02.399276 3714493 node_conditions.go:123] node cpu capacity is 2
	I0701 14:19:02.399289 3714493 node_conditions.go:105] duration metric: took 3.948028ms to run NodePressure ...
	I0701 14:19:02.399302 3714493 start.go:240] waiting for startup goroutines ...
	I0701 14:19:02.399310 3714493 start.go:245] waiting for cluster config update ...
	I0701 14:19:02.399326 3714493 start.go:254] writing updated cluster config ...
	I0701 14:19:02.399630 3714493 ssh_runner.go:195] Run: rm -f paused
	I0701 14:19:02.737108 3714493 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0701 14:19:02.739247 3714493 out.go:177] * Done! kubectl is now configured to use "addons-929335" cluster and "default" namespace by default
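	
	The start-up log ends successfully here; everything below is the per-component post-mortem dump ("==> <name> <==" sections) that minikube emits for the profile. To regenerate the same dump against this profile:
	
	# Collect the sectioned component dump for the addons-929335 profile
	out/minikube-linux-arm64 -p addons-929335 logs
	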
	
	
	==> CRI-O <==
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.149359479Z" level=info msg="Removing pod sandbox: ceb3c146cda59ef6c10b2c1c50d08516933ba2966e6ba3095805af99460fe111" id=a0d1046d-3522-4914-a5a2-fae320aaf71c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.166051460Z" level=info msg="Removed pod sandbox: ceb3c146cda59ef6c10b2c1c50d08516933ba2966e6ba3095805af99460fe111" id=a0d1046d-3522-4914-a5a2-fae320aaf71c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.166645402Z" level=info msg="Stopping pod sandbox: 4ea9091ce2577ae3a3b51accf2c59b1ff1249cf54b3a141e2f1e9462c56c9b90" id=dca95828-9c03-4964-b444-76b5df75f8dc name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.166695175Z" level=info msg="Stopped pod sandbox (already stopped): 4ea9091ce2577ae3a3b51accf2c59b1ff1249cf54b3a141e2f1e9462c56c9b90" id=dca95828-9c03-4964-b444-76b5df75f8dc name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.167146805Z" level=info msg="Removing pod sandbox: 4ea9091ce2577ae3a3b51accf2c59b1ff1249cf54b3a141e2f1e9462c56c9b90" id=7515f07f-7720-4ee4-b5e3-5ac68c484990 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.177121246Z" level=info msg="Removed pod sandbox: 4ea9091ce2577ae3a3b51accf2c59b1ff1249cf54b3a141e2f1e9462c56c9b90" id=7515f07f-7720-4ee4-b5e3-5ac68c484990 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.177630469Z" level=info msg="Stopping pod sandbox: 40813e8717ef5bfb2327e10034ae821c14d60bf2dc314dd1f8e6db3c86aa14b7" id=3b8b6fce-ecd9-43f4-b1ba-19196852a349 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.177665374Z" level=info msg="Stopped pod sandbox (already stopped): 40813e8717ef5bfb2327e10034ae821c14d60bf2dc314dd1f8e6db3c86aa14b7" id=3b8b6fce-ecd9-43f4-b1ba-19196852a349 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.178071662Z" level=info msg="Removing pod sandbox: 40813e8717ef5bfb2327e10034ae821c14d60bf2dc314dd1f8e6db3c86aa14b7" id=f09a79bd-0af0-4207-bc74-63069b8e09c8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.188437597Z" level=info msg="Removed pod sandbox: 40813e8717ef5bfb2327e10034ae821c14d60bf2dc314dd1f8e6db3c86aa14b7" id=f09a79bd-0af0-4207-bc74-63069b8e09c8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.188942898Z" level=info msg="Stopping pod sandbox: da43fbca5d890db0a4737344acb6909ae3e7808893da891d476ae8d3b6f3e870" id=0486af47-96dd-4411-b0d9-fccee8cac03b name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.188975022Z" level=info msg="Stopped pod sandbox (already stopped): da43fbca5d890db0a4737344acb6909ae3e7808893da891d476ae8d3b6f3e870" id=0486af47-96dd-4411-b0d9-fccee8cac03b name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.189496995Z" level=info msg="Removing pod sandbox: da43fbca5d890db0a4737344acb6909ae3e7808893da891d476ae8d3b6f3e870" id=8f620380-fc73-495e-9bb0-a6a3628adf3c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.200144090Z" level=info msg="Removed pod sandbox: da43fbca5d890db0a4737344acb6909ae3e7808893da891d476ae8d3b6f3e870" id=8f620380-fc73-495e-9bb0-a6a3628adf3c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.200643237Z" level=info msg="Stopping pod sandbox: 0bef94b2fe69f46bddfd9a43a31cc0ee74f7fc3cfd4399c1d23d924266c2b435" id=82da4f7d-6072-4e4a-a899-09b825feafb5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.200684796Z" level=info msg="Stopped pod sandbox (already stopped): 0bef94b2fe69f46bddfd9a43a31cc0ee74f7fc3cfd4399c1d23d924266c2b435" id=82da4f7d-6072-4e4a-a899-09b825feafb5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.201387498Z" level=info msg="Removing pod sandbox: 0bef94b2fe69f46bddfd9a43a31cc0ee74f7fc3cfd4399c1d23d924266c2b435" id=788c2e0f-6050-431c-92e3-addc0a7ce535 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 01 14:24:20 addons-929335 crio[962]: time="2024-07-01 14:24:20.212654208Z" level=info msg="Removed pod sandbox: 0bef94b2fe69f46bddfd9a43a31cc0ee74f7fc3cfd4399c1d23d924266c2b435" id=788c2e0f-6050-431c-92e3-addc0a7ce535 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Jul 01 14:24:38 addons-929335 crio[962]: time="2024-07-01 14:24:38.607632897Z" level=info msg="Stopping container: e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e (timeout: 30s)" id=81f8a036-1201-4ee4-b94b-27eb94fea5b9 name=/runtime.v1.RuntimeService/StopContainer
	Jul 01 14:24:39 addons-929335 crio[962]: time="2024-07-01 14:24:39.775876400Z" level=info msg="Stopped container e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e: kube-system/metrics-server-c59844bb4-7ddxq/metrics-server" id=81f8a036-1201-4ee4-b94b-27eb94fea5b9 name=/runtime.v1.RuntimeService/StopContainer
	Jul 01 14:24:39 addons-929335 crio[962]: time="2024-07-01 14:24:39.776778980Z" level=info msg="Stopping pod sandbox: 316c63aa6830a76f42342f8c92fbd89b8effa5b168b665c209bbe150a45b1f75" id=2cc71d76-7aea-4745-8f6e-2a09d60fadc0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:24:39 addons-929335 crio[962]: time="2024-07-01 14:24:39.776993694Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-7ddxq Namespace:kube-system ID:316c63aa6830a76f42342f8c92fbd89b8effa5b168b665c209bbe150a45b1f75 UID:d044ed9e-3f07-4293-b20a-7710385bba17 NetNS:/var/run/netns/6b2d8c49-2913-47ea-9b81-baed8315dec0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 01 14:24:39 addons-929335 crio[962]: time="2024-07-01 14:24:39.777186861Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-7ddxq from CNI network \"kindnet\" (type=ptp)"
	Jul 01 14:24:39 addons-929335 crio[962]: time="2024-07-01 14:24:39.815209603Z" level=info msg="Stopped pod sandbox: 316c63aa6830a76f42342f8c92fbd89b8effa5b168b665c209bbe150a45b1f75" id=2cc71d76-7aea-4745-8f6e-2a09d60fadc0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 01 14:24:39 addons-929335 crio[962]: time="2024-07-01 14:24:39.871310170Z" level=info msg="Removing container: e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e" id=fa2d8cc8-7341-4359-97c7-506feee041db name=/runtime.v1.RuntimeService/RemoveContainer
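	
	The sandbox removals at 14:24:20 and the metrics-server stop at 14:24:38-39 show the metrics-server pod being torn down at the end of the run. A sketch for pulling the stopped container's output from the node, reusing the container ID from the "Stopping container" line above:
	
	# List the (now exited) metrics-server container, then read its last log lines
	out/minikube-linux-arm64 -p addons-929335 ssh "sudo crictl ps -a --name metrics-server"
	out/minikube-linux-arm64 -p addons-929335 ssh "sudo crictl logs --tail 100 e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e"
	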
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	728597a1e21a9       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                               33 seconds ago      Exited              hello-world-app           3                   c530951965dee       hello-world-app-86c47465fc-t4bld
	816a344cbbf06       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                3 minutes ago       Running             nginx                     0                   4b4737705ddc8       nginx
	1726107bb7a18       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37          5 minutes ago       Running             headlamp                  0                   0beb7a74486ef       headlamp-7867546754-jbhwq
	bcb9f3a8b177e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69   6 minutes ago       Running             gcp-auth                  0                   937faf0acb5f8       gcp-auth-5db96cd9b4-zzdzf
	f4f2268f451b3       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                7 minutes ago       Running             yakd                      0                   ad1a1e08aa277       yakd-dashboard-799879c74f-k9fkr
	c7a57f061ff4a       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93                                               7 minutes ago       Running             coredns                   0                   e8dd3c5d29672       coredns-7db6d8ff4d-s8jw9
	cd615c19161cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               7 minutes ago       Running             storage-provisioner       0                   fc87ffa26b343       storage-provisioner
	dafa28039c484       66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae                                               8 minutes ago       Running             kube-proxy                0                   1892d9010797f       kube-proxy-b7sh5
	db206e1b79fd3       89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40                                               8 minutes ago       Running             kindnet-cni               0                   0634e6cdeec1f       kindnet-nzscv
	a8156a2a69e7a       84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0                                               8 minutes ago       Running             kube-apiserver            0                   b6e2dbefac823       kube-apiserver-addons-929335
	a5290b2c5513d       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd                                               8 minutes ago       Running             etcd                      0                   77244071fc809       etcd-addons-929335
	646ad903c2a53       e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567                                               8 minutes ago       Running             kube-controller-manager   0                   f8a7cc5b1d3cf       kube-controller-manager-addons-929335
	f433fbd81a7c4       c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5                                               8 minutes ago       Running             kube-scheduler            0                   751716a05cad0       kube-scheduler-addons-929335
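	
	Note the first row: hello-world-app is in state Exited with ATTEMPT 3, i.e. the container has been restarting, which would explain requests to the app going unanswered. To dig into why it keeps exiting (pod name taken from the table above):
	
	# Events for the restarting pod, then the log of its previous attempt
	kubectl --context addons-929335 -n default describe pod hello-world-app-86c47465fc-t4bld
	kubectl --context addons-929335 -n default logs hello-world-app-86c47465fc-t4bld --previous
	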
	
	
	==> coredns [c7a57f061ff4a151d15d430b83dde99c1df625beb614950951aa45f85f78d76a] <==
	[INFO] 10.244.0.19:40367 - 9202 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000372369s
	[INFO] 10.244.0.19:60297 - 58306 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002308197s
	[INFO] 10.244.0.19:40367 - 22993 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002149033s
	[INFO] 10.244.0.19:40367 - 14452 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002168922s
	[INFO] 10.244.0.19:60297 - 47707 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002573265s
	[INFO] 10.244.0.19:40367 - 63736 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000116571s
	[INFO] 10.244.0.19:60297 - 39186 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059915s
	[INFO] 10.244.0.19:47259 - 19264 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101367s
	[INFO] 10.244.0.19:56453 - 43442 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054958s
	[INFO] 10.244.0.19:56453 - 4050 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006186s
	[INFO] 10.244.0.19:47259 - 44322 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066938s
	[INFO] 10.244.0.19:56453 - 55262 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055795s
	[INFO] 10.244.0.19:47259 - 53949 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051873s
	[INFO] 10.244.0.19:56453 - 59506 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006478s
	[INFO] 10.244.0.19:47259 - 52115 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047754s
	[INFO] 10.244.0.19:56453 - 14654 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050274s
	[INFO] 10.244.0.19:56453 - 61532 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006062s
	[INFO] 10.244.0.19:47259 - 18474 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000080632s
	[INFO] 10.244.0.19:47259 - 19652 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000075119s
	[INFO] 10.244.0.19:56453 - 25691 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001443443s
	[INFO] 10.244.0.19:47259 - 43778 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0010961s
	[INFO] 10.244.0.19:56453 - 43564 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001001527s
	[INFO] 10.244.0.19:56453 - 19462 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000067693s
	[INFO] 10.244.0.19:47259 - 46001 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001038565s
	[INFO] 10.244.0.19:47259 - 13415 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054917s
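	
	The NXDOMAIN burst is normal resolver behavior, not an error: with the default ndots:5 pod DNS config, "hello-world-app.default.svc.cluster.local" is first expanded through every search-path suffix (.ingress-nginx.svc..., .svc..., .cluster.local, .us-east-2.compute.internal) and only the final, already-qualified query returns NOERROR. In-cluster DNS is therefore resolving the app service correctly. A quick in-cluster check (the busybox image/tag is just an example):
	
	# Run a throwaway pod and resolve the service name through CoreDNS
	kubectl --context addons-929335 run dnstest --rm -it --restart=Never --image=busybox:1.36 -- \
	  nslookup hello-world-app.default.svc.cluster.local
	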
	
	
	==> describe nodes <==
	Name:               addons-929335
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-929335
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=addons-929335
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_01T14_16_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-929335
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 14:16:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-929335
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 14:24:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 14:23:59 +0000   Mon, 01 Jul 2024 14:16:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 14:23:59 +0000   Mon, 01 Jul 2024 14:16:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 14:23:59 +0000   Mon, 01 Jul 2024 14:16:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 14:23:59 +0000   Mon, 01 Jul 2024 14:17:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-929335
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9d0ede6bb1d4bb381c7b3fce060be76
	  System UUID:                fcb5e7bf-e654-480c-840b-846ff4889ec5
	  Boot ID:                    030faa4f-44aa-434e-978f-182f6d212f48
	  Kernel Version:             5.15.0-1063-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-t4bld         0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  gcp-auth                    gcp-auth-5db96cd9b4-zzdzf                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  headlamp                    headlamp-7867546754-jbhwq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 coredns-7db6d8ff4d-s8jw9                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m7s
	  kube-system                 etcd-addons-929335                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m21s
	  kube-system                 kindnet-nzscv                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m7s
	  kube-system                 kube-apiserver-addons-929335             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-controller-manager-addons-929335    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-proxy-b7sh5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-scheduler-addons-929335             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  yakd-dashboard              yakd-dashboard-799879c74f-k9fkr          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     8m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m     kube-proxy       
	  Normal  Starting                 8m21s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m21s  kubelet          Node addons-929335 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m21s  kubelet          Node addons-929335 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m21s  kubelet          Node addons-929335 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m7s   node-controller  Node addons-929335 event: Registered Node addons-929335 in Controller
	  Normal  NodeReady                7m33s  kubelet          Node addons-929335 status is now: NodeReady
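	
	This section is the output of the "describe nodes" gathering step earlier in the log. It can be reproduced either the way the harness runs it (with the kubectl binary and kubeconfig on the node) or directly from the host:
	
	# On the node, exactly as the harness runs it
	out/minikube-linux-arm64 -p addons-929335 ssh "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	# From the host
	kubectl --context addons-929335 describe node addons-929335
	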
	
	
	==> dmesg <==
	[  +0.001028] FS-Cache: O-key=[8] '8b8e3b0000000000'
	[  +0.000694] FS-Cache: N-cookie c=000001e0 [p=000001d7 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=00000000045eedfb
	[  +0.001018] FS-Cache: N-key=[8] '8b8e3b0000000000'
	[  +0.014530] FS-Cache: Duplicate cookie detected
	[  +0.000695] FS-Cache: O-cookie c=000001da [p=000001d7 fl=226 nc=0 na=1]
	[  +0.000939] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=000000001f9e9a8e
	[  +0.001023] FS-Cache: O-key=[8] '8b8e3b0000000000'
	[  +0.000689] FS-Cache: N-cookie c=000001e1 [p=000001d7 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=00000000c4ff6e50
	[  +0.001026] FS-Cache: N-key=[8] '8b8e3b0000000000'
	[  +2.755378] FS-Cache: Duplicate cookie detected
	[  +0.000724] FS-Cache: O-cookie c=000001d8 [p=000001d7 fl=226 nc=0 na=1]
	[  +0.000958] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=00000000dd0e7f7e
	[  +0.001033] FS-Cache: O-key=[8] '8a8e3b0000000000'
	[  +0.000734] FS-Cache: N-cookie c=000001e3 [p=000001d7 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=00000000045eedfb
	[  +0.001022] FS-Cache: N-key=[8] '8a8e3b0000000000'
	[  +0.295007] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=000001dd [p=000001d7 fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=000000002ac53bcf
	[  +0.001042] FS-Cache: O-key=[8] '908e3b0000000000'
	[  +0.000722] FS-Cache: N-cookie c=000001e4 [p=000001d7 fl=2 nc=0 na=1]
	[  +0.000933] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=00000000dca0f41c
	[  +0.001038] FS-Cache: N-key=[8] '908e3b0000000000'
	
	
	==> etcd [a5290b2c5513d5a3bbd472b9f73b2671ed866a11184aedc0717ebcac871af025] <==
	{"level":"info","ts":"2024-07-01T14:16:13.257178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-01T14:16:13.257188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-01T14:16:13.257196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-01T14:16:13.26124Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-929335 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-01T14:16:13.265123Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T14:16:13.265282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-01T14:16:13.265578Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-01T14:16:13.267232Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-01T14:16:13.270535Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-01T14:16:13.281377Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-01T14:16:13.281949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-01T14:16:13.281444Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T14:16:13.324782Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T14:16:13.324898Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-01T14:16:36.435212Z","caller":"traceutil/trace.go:171","msg":"trace[1642189225] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"108.969832ms","start":"2024-07-01T14:16:36.32622Z","end":"2024-07-01T14:16:36.43519Z","steps":["trace[1642189225] 'process raft request'  (duration: 11.836684ms)","trace[1642189225] 'compare'  (duration: 81.260435ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-01T14:16:36.538281Z","caller":"traceutil/trace.go:171","msg":"trace[1453600289] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"121.328087ms","start":"2024-07-01T14:16:36.416937Z","end":"2024-07-01T14:16:36.538265Z","steps":["trace[1453600289] 'process raft request'  (duration: 121.206797ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-01T14:16:37.229631Z","caller":"traceutil/trace.go:171","msg":"trace[437676447] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"134.004714ms","start":"2024-07-01T14:16:37.095608Z","end":"2024-07-01T14:16:37.229613Z","steps":["trace[437676447] 'process raft request'  (duration: 97.681552ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:16:37.232918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.182642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-01T14:16:37.233096Z","caller":"traceutil/trace.go:171","msg":"trace[1873488679] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:371; }","duration":"137.363239ms","start":"2024-07-01T14:16:37.095716Z","end":"2024-07-01T14:16:37.233079Z","steps":["trace[1873488679] 'agreement among raft nodes before linearized reading'  (duration: 137.152709ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:16:37.233547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.481765ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-929335\" ","response":"range_response_count:1 size:5744"}
	{"level":"info","ts":"2024-07-01T14:16:37.239983Z","caller":"traceutil/trace.go:171","msg":"trace[562760670] range","detail":"{range_begin:/registry/minions/addons-929335; range_end:; response_count:1; response_revision:371; }","duration":"137.931072ms","start":"2024-07-01T14:16:37.095688Z","end":"2024-07-01T14:16:37.233619Z","steps":["trace[562760670] 'agreement among raft nodes before linearized reading'  (duration: 97.703821ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-01T14:16:37.633465Z","caller":"traceutil/trace.go:171","msg":"trace[1799736065] linearizableReadLoop","detail":"{readStateIndex:385; appliedIndex:384; }","duration":"103.3815ms","start":"2024-07-01T14:16:37.530067Z","end":"2024-07-01T14:16:37.633448Z","steps":["trace[1799736065] 'read index received'  (duration: 42.997053ms)","trace[1799736065] 'applied index is now lower than readState.Index'  (duration: 60.383503ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-01T14:16:37.633621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.536979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
	{"level":"info","ts":"2024-07-01T14:16:37.633644Z","caller":"traceutil/trace.go:171","msg":"trace[393152311] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:374; }","duration":"103.576086ms","start":"2024-07-01T14:16:37.530061Z","end":"2024-07-01T14:16:37.633637Z","steps":["trace[393152311] 'agreement among raft nodes before linearized reading'  (duration: 103.468106ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-01T14:16:37.633847Z","caller":"traceutil/trace.go:171","msg":"trace[1893789424] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"131.925726ms","start":"2024-07-01T14:16:37.501909Z","end":"2024-07-01T14:16:37.633835Z","steps":["trace[1893789424] 'process raft request'  (duration: 71.148168ms)","trace[1893789424] 'compare'  (duration: 60.270312ms)"],"step_count":2}
	
	
	==> gcp-auth [bcb9f3a8b177e26d49ce5f5b002574f3acd6226509cf5871f475678d5732846c] <==
	2024/07/01 14:18:14 GCP Auth Webhook started!
	2024/07/01 14:19:03 Ready to marshal response ...
	2024/07/01 14:19:03 Ready to write response ...
	2024/07/01 14:19:03 Ready to marshal response ...
	2024/07/01 14:19:03 Ready to write response ...
	2024/07/01 14:19:03 Ready to marshal response ...
	2024/07/01 14:19:03 Ready to write response ...
	2024/07/01 14:19:14 Ready to marshal response ...
	2024/07/01 14:19:14 Ready to write response ...
	2024/07/01 14:19:20 Ready to marshal response ...
	2024/07/01 14:19:20 Ready to write response ...
	2024/07/01 14:19:20 Ready to marshal response ...
	2024/07/01 14:19:20 Ready to write response ...
	2024/07/01 14:19:27 Ready to marshal response ...
	2024/07/01 14:19:27 Ready to write response ...
	2024/07/01 14:20:14 Ready to marshal response ...
	2024/07/01 14:20:14 Ready to write response ...
	2024/07/01 14:20:46 Ready to marshal response ...
	2024/07/01 14:20:46 Ready to write response ...
	2024/07/01 14:21:03 Ready to marshal response ...
	2024/07/01 14:21:03 Ready to write response ...
	2024/07/01 14:23:23 Ready to marshal response ...
	2024/07/01 14:23:23 Ready to write response ...
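	
	Each marshal/write pair corresponds to the gcp-auth admission webhook handling a pod, and the timestamps track pod creations seen elsewhere in this report (headlamp at 14:19:03, nginx at 14:21:03, hello-world-app at 14:23:23). To confirm the webhook registration (the exact object name is whatever the addon installed, so list first):
	
	# List mutating admission webhooks and look for the gcp-auth entry
	kubectl --context addons-929335 get mutatingwebhookconfigurations
	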
	
	
	==> kernel <==
	 14:24:40 up 1 day, 22:07,  0 users,  load average: 0.38, 0.85, 1.68
	Linux addons-929335 5.15.0-1063-aws #69~20.04.1-Ubuntu SMP Fri May 10 19:21:30 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [db206e1b79fd340c8ff68753272092a66bb0ca3c5c4da453bee355570e8c95c3] <==
	I0701 14:22:38.053418       1 main.go:227] handling current node
	I0701 14:22:48.063560       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:22:48.063599       1 main.go:227] handling current node
	I0701 14:22:58.069377       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:22:58.069406       1 main.go:227] handling current node
	I0701 14:23:08.080845       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:08.080967       1 main.go:227] handling current node
	I0701 14:23:18.087285       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:18.087321       1 main.go:227] handling current node
	I0701 14:23:28.093401       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:28.093447       1 main.go:227] handling current node
	I0701 14:23:38.097273       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:38.097305       1 main.go:227] handling current node
	I0701 14:23:48.109727       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:48.109758       1 main.go:227] handling current node
	I0701 14:23:58.114062       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:23:58.114092       1 main.go:227] handling current node
	I0701 14:24:08.125772       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:24:08.125799       1 main.go:227] handling current node
	I0701 14:24:18.129690       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:24:18.129721       1 main.go:227] handling current node
	I0701 14:24:28.142101       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:24:28.142128       1 main.go:227] handling current node
	I0701 14:24:38.145813       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:24:38.145846       1 main.go:227] handling current node
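	
	kindnet is steady-state here: a single node, re-handled roughly every 10 seconds with no errors, so pod networking is not implicated. If it needed checking, the DaemonSet pods live in kube-system (the label selector below is the usual kindnet manifest convention and is an assumption):
	
	# kindnet pods and the node each one landed on
	kubectl --context addons-929335 -n kube-system get pods -l app=kindnet -o wide
	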
	
	
	==> kube-apiserver [a8156a2a69e7ae02e5e72b7567252eb9769ebd368202f6f91a59f07c20f25657] <==
	E0701 14:18:33.700239       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0701 14:18:33.700146       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.85.235:443/apis/metrics.k8s.io/v1beta1: Get "https://10.103.85.235:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.103.85.235:443: i/o timeout
	I0701 14:18:33.755720       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0701 14:18:33.768459       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0701 14:19:03.660593       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.175.140"}
	E0701 14:19:43.841124       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0701 14:20:25.663964       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0701 14:21:02.326603       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0701 14:21:02.326753       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0701 14:21:02.356744       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0701 14:21:02.356790       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0701 14:21:02.377756       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0701 14:21:02.377884       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0701 14:21:02.416248       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0701 14:21:02.416924       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0701 14:21:02.915268       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0701 14:21:03.219137       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.184.120"}
	W0701 14:21:03.366073       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0701 14:21:03.417317       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0701 14:21:03.422258       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0701 14:23:24.106164       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.179.149"}
	E0701 14:23:40.785698       1 watch.go:250] http2: stream closed
	I0701 14:23:57.445227       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0701 14:23:58.494565       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [646ad903c2a5304751f5f77a05c9129e14ba152f66a2be8e3401aba05db38393] <==
	I0701 14:23:51.495238       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0701 14:23:54.706012       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="44.103µs"
	E0701 14:23:58.496035       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0701 14:23:59.197875       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:23:59.197917       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0701 14:23:59.822466       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:23:59.822502       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0701 14:24:02.190579       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:24:02.190617       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0701 14:24:04.147027       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0701 14:24:04.147073       1 shared_informer.go:320] Caches are synced for resource quota
	I0701 14:24:04.612804       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0701 14:24:04.612855       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 14:24:06.824411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="140.867µs"
	I0701 14:24:07.574592       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0701 14:24:08.356206       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:24:08.356246       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0701 14:24:18.619842       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:24:18.619879       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0701 14:24:20.267219       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:24:20.267261       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0701 14:24:20.706264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.749µs"
	W0701 14:24:36.271649       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0701 14:24:36.271693       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0701 14:24:38.578795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="4.02µs"
	
	
	==> kube-proxy [dafa28039c4841368c227b1cc5fa438574aa5ef26be86afdb808b408ec61ecba] <==
	I0701 14:16:39.287139       1 server_linux.go:69] "Using iptables proxy"
	I0701 14:16:39.484252       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0701 14:16:39.736474       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0701 14:16:39.736537       1 server_linux.go:165] "Using iptables Proxier"
	I0701 14:16:39.774595       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0701 14:16:39.774697       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0701 14:16:39.774745       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 14:16:39.775014       1 server.go:872] "Version info" version="v1.30.2"
	I0701 14:16:39.775080       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 14:16:39.819782       1 config.go:192] "Starting service config controller"
	I0701 14:16:39.819877       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 14:16:39.820069       1 config.go:101] "Starting endpoint slice config controller"
	I0701 14:16:39.847398       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 14:16:39.820539       1 config.go:319] "Starting node config controller"
	I0701 14:16:39.850820       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 14:16:39.945864       1 shared_informer.go:320] Caches are synced for service config
	I0701 14:16:39.953112       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0701 14:16:39.960916       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f433fbd81a7c432d6358361f8cafded5f8ef95bddb397242e11056291e318fa8] <==
	W0701 14:16:17.168361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 14:16:17.168381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0701 14:16:17.168429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 14:16:17.168444       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0701 14:16:17.168488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 14:16:17.168505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 14:16:17.168595       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0701 14:16:17.168609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0701 14:16:17.168643       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 14:16:17.168695       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 14:16:17.168782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 14:16:17.168801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 14:16:17.168860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 14:16:17.168876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 14:16:17.997209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 14:16:17.997337       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 14:16:18.020656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 14:16:18.021351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0701 14:16:18.242822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 14:16:18.242975       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 14:16:18.276477       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0701 14:16:18.276586       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0701 14:16:18.279444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 14:16:18.279587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0701 14:16:18.761837       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 01 14:23:57 addons-929335 kubelet[1552]: I0701 14:23:57.691115    1552 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zrmm2\" (UniqueName: \"kubernetes.io/projected/af7151fd-575f-412e-84ee-483ab9498590-kube-api-access-zrmm2\") on node \"addons-929335\" DevicePath \"\""
	Jul 01 14:23:57 addons-929335 kubelet[1552]: I0701 14:23:57.691128    1552 reconciler_common.go:289] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/af7151fd-575f-412e-84ee-483ab9498590-debugfs\") on node \"addons-929335\" DevicePath \"\""
	Jul 01 14:23:57 addons-929335 kubelet[1552]: I0701 14:23:57.691137    1552 reconciler_common.go:289] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/af7151fd-575f-412e-84ee-483ab9498590-modules\") on node \"addons-929335\" DevicePath \"\""
	Jul 01 14:23:57 addons-929335 kubelet[1552]: I0701 14:23:57.786716    1552 scope.go:117] "RemoveContainer" containerID="8c9113c9643e879eb9692538390f9ea69d3fbd413925c088565df60cfc15318d"
	Jul 01 14:23:59 addons-929335 kubelet[1552]: I0701 14:23:59.693183    1552 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af7151fd-575f-412e-84ee-483ab9498590" path="/var/lib/kubelet/pods/af7151fd-575f-412e-84ee-483ab9498590/volumes"
	Jul 01 14:24:06 addons-929335 kubelet[1552]: I0701 14:24:06.691962    1552 scope.go:117] "RemoveContainer" containerID="80ba2de1a9989a8fab4559d13996447401349e0643fa3473c47c5c8409e8fba5"
	Jul 01 14:24:06 addons-929335 kubelet[1552]: I0701 14:24:06.805653    1552 scope.go:117] "RemoveContainer" containerID="80ba2de1a9989a8fab4559d13996447401349e0643fa3473c47c5c8409e8fba5"
	Jul 01 14:24:06 addons-929335 kubelet[1552]: I0701 14:24:06.806522    1552 scope.go:117] "RemoveContainer" containerID="728597a1e21a97f53cb877e66d083ee8ca94807f01c71316c909beab12f0612e"
	Jul 01 14:24:06 addons-929335 kubelet[1552]: E0701 14:24:06.806983    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-t4bld_default(df186926-03b3-4ae0-a031-91f5b7ef161d)\"" pod="default/hello-world-app-86c47465fc-t4bld" podUID="df186926-03b3-4ae0-a031-91f5b7ef161d"
	Jul 01 14:24:20 addons-929335 kubelet[1552]: I0701 14:24:20.103463    1552 scope.go:117] "RemoveContainer" containerID="e0fb5c63dd33a7a6f3dcb213d564e97734d713e68da938d2c2bf42bf310fc1c7"
	Jul 01 14:24:20 addons-929335 kubelet[1552]: I0701 14:24:20.127172    1552 scope.go:117] "RemoveContainer" containerID="9d1efded8009373eef4bf320fccbb1a5cae3f363fe8c9b8cf673bc79a48dfc9e"
	Jul 01 14:24:20 addons-929335 kubelet[1552]: I0701 14:24:20.692056    1552 scope.go:117] "RemoveContainer" containerID="728597a1e21a97f53cb877e66d083ee8ca94807f01c71316c909beab12f0612e"
	Jul 01 14:24:20 addons-929335 kubelet[1552]: E0701 14:24:20.692370    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-t4bld_default(df186926-03b3-4ae0-a031-91f5b7ef161d)\"" pod="default/hello-world-app-86c47465fc-t4bld" podUID="df186926-03b3-4ae0-a031-91f5b7ef161d"
	Jul 01 14:24:33 addons-929335 kubelet[1552]: I0701 14:24:33.692129    1552 scope.go:117] "RemoveContainer" containerID="728597a1e21a97f53cb877e66d083ee8ca94807f01c71316c909beab12f0612e"
	Jul 01 14:24:33 addons-929335 kubelet[1552]: E0701 14:24:33.692405    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 40s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-t4bld_default(df186926-03b3-4ae0-a031-91f5b7ef161d)\"" pod="default/hello-world-app-86c47465fc-t4bld" podUID="df186926-03b3-4ae0-a031-91f5b7ef161d"
	Jul 01 14:24:39 addons-929335 kubelet[1552]: I0701 14:24:39.858456    1552 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d044ed9e-3f07-4293-b20a-7710385bba17-tmp-dir\") pod \"d044ed9e-3f07-4293-b20a-7710385bba17\" (UID: \"d044ed9e-3f07-4293-b20a-7710385bba17\") "
	Jul 01 14:24:39 addons-929335 kubelet[1552]: I0701 14:24:39.858526    1552 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfght\" (UniqueName: \"kubernetes.io/projected/d044ed9e-3f07-4293-b20a-7710385bba17-kube-api-access-tfght\") pod \"d044ed9e-3f07-4293-b20a-7710385bba17\" (UID: \"d044ed9e-3f07-4293-b20a-7710385bba17\") "
	Jul 01 14:24:39 addons-929335 kubelet[1552]: I0701 14:24:39.859082    1552 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d044ed9e-3f07-4293-b20a-7710385bba17-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "d044ed9e-3f07-4293-b20a-7710385bba17" (UID: "d044ed9e-3f07-4293-b20a-7710385bba17"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 01 14:24:39 addons-929335 kubelet[1552]: I0701 14:24:39.863871    1552 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d044ed9e-3f07-4293-b20a-7710385bba17-kube-api-access-tfght" (OuterVolumeSpecName: "kube-api-access-tfght") pod "d044ed9e-3f07-4293-b20a-7710385bba17" (UID: "d044ed9e-3f07-4293-b20a-7710385bba17"). InnerVolumeSpecName "kube-api-access-tfght". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 01 14:24:39 addons-929335 kubelet[1552]: I0701 14:24:39.867927    1552 scope.go:117] "RemoveContainer" containerID="e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e"
	Jul 01 14:24:39 addons-929335 kubelet[1552]: I0701 14:24:39.899419    1552 scope.go:117] "RemoveContainer" containerID="e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e"
	Jul 01 14:24:39 addons-929335 kubelet[1552]: E0701 14:24:39.901479    1552 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e\": container with ID starting with e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e not found: ID does not exist" containerID="e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e"
	Jul 01 14:24:39 addons-929335 kubelet[1552]: I0701 14:24:39.901519    1552 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e"} err="failed to get container status \"e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e\": rpc error: code = NotFound desc = could not find container \"e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e\": container with ID starting with e9ea99849b7a1556a4bbcbfd89c532a93df40ecfb61c32791d87fcf8fcab151e not found: ID does not exist"
	Jul 01 14:24:39 addons-929335 kubelet[1552]: I0701 14:24:39.959346    1552 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d044ed9e-3f07-4293-b20a-7710385bba17-tmp-dir\") on node \"addons-929335\" DevicePath \"\""
	Jul 01 14:24:39 addons-929335 kubelet[1552]: I0701 14:24:39.959390    1552 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tfght\" (UniqueName: \"kubernetes.io/projected/d044ed9e-3f07-4293-b20a-7710385bba17-kube-api-access-tfght\") on node \"addons-929335\" DevicePath \"\""
	
	
	==> storage-provisioner [cd615c19161cd88f920f62f148cffc09c7eb70fe165441223629793a8598b765] <==
	I0701 14:17:09.154063       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0701 14:17:09.174307       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0701 14:17:09.174358       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0701 14:17:09.189621       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0701 14:17:09.189941       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-929335_467bfbe3-cc7a-4f06-8aff-686955b35647!
	I0701 14:17:09.191291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8cb3561d-e28c-48ec-8580-912e5e2662a2", APIVersion:"v1", ResourceVersion:"897", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-929335_467bfbe3-cc7a-4f06-8aff-686955b35647 became leader
	I0701 14:17:09.290090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-929335_467bfbe3-cc7a-4f06-8aff-686955b35647!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-929335 -n addons-929335
helpers_test.go:261: (dbg) Run:  kubectl --context addons-929335 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (310.22s)
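
The kube-apiserver log above shows why metrics-server never became available: the v1beta1.metrics.k8s.io APIService kept failing with 503 responses and i/o timeouts dialing 10.103.85.235:443. A minimal diagnostic sketch for reproducing that state, assuming the addons-929335 context is still live (the k8s-app=metrics-server label is assumed from the upstream metrics-server manifests, not taken from this report):

	# Is the aggregated API marked Available, and if not, why?
	kubectl --context addons-929335 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")]}'
	# Is the backing pod actually Running and Ready?
	kubectl --context addons-929335 -n kube-system get pods -l k8s-app=metrics-server -o wide
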

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (128.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-767646 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0701 14:38:47.033045 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:39:02.765359 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-767646 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m3.535506026s)
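
The two cert_rotation errors above appear to come from client-go's client-certificate watcher, which is still tracking kubeconfig entries for profiles (functional-373457, addons-929335) that earlier tests tore down; that makes them pre-existing noise in the shared test home rather than part of this failure. A quick sanity check, using the profiles path taken verbatim from the errors:

	# Profiles whose client certs still exist on disk
	ls /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/
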
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-767646       NotReady   control-plane   10m     v1.30.2
	ha-767646-m02   Ready      control-plane   10m     v1.30.2
	ha-767646-m04   Ready      <none>          8m14s   v1.30.2

                                                
                                                
-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
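
A Ready status of Unknown means the node controller stopped receiving kubelet heartbeats from ha-767646 after the restart, so it can no longer vouch for the node. The test's template can be extended to show which node is unhealthy and why; a sketch against the same kubeconfig (the reason field, e.g. NodeStatusUnknown, is set by kubelet or the node controller):

	# Name, Ready status, and reason for every node
	kubectl get nodes -o go-template='{{range .items}}{{.metadata.name}}{{": "}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}} ({{.reason}}){{end}}{{end}}{{"\n"}}{{end}}'
	# Full condition history for the NotReady control-plane node
	kubectl describe node ha-767646
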
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-767646
helpers_test.go:235: (dbg) docker inspect ha-767646:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b07ec69b038ff2dfbfa3d1835c65a1e7dca78356fdd4dcc2b404f75a589c6fb5",
	        "Created": "2024-07-01T14:29:17.651640068Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3774733,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-01T14:38:32.061639896Z",
	            "FinishedAt": "2024-07-01T14:38:31.27819778Z"
	        },
	        "Image": "sha256:59cf53f54b1bed0b432ebf08c6ac817bec062867b90e25c5452b8e7c3276a7ff",
	        "ResolvConfPath": "/var/lib/docker/containers/b07ec69b038ff2dfbfa3d1835c65a1e7dca78356fdd4dcc2b404f75a589c6fb5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b07ec69b038ff2dfbfa3d1835c65a1e7dca78356fdd4dcc2b404f75a589c6fb5/hostname",
	        "HostsPath": "/var/lib/docker/containers/b07ec69b038ff2dfbfa3d1835c65a1e7dca78356fdd4dcc2b404f75a589c6fb5/hosts",
	        "LogPath": "/var/lib/docker/containers/b07ec69b038ff2dfbfa3d1835c65a1e7dca78356fdd4dcc2b404f75a589c6fb5/b07ec69b038ff2dfbfa3d1835c65a1e7dca78356fdd4dcc2b404f75a589c6fb5-json.log",
	        "Name": "/ha-767646",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-767646:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-767646",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5ba397a4f7c8a340e03770d5d5bc8408c4a3042eb772ed8488f3aa007281feab-init/diff:/var/lib/docker/overlay2/c3139abb5cf1c83f6f12f6a5f4a9c8df468321ed41d6e455d104ebf4c7d8657d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ba397a4f7c8a340e03770d5d5bc8408c4a3042eb772ed8488f3aa007281feab/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ba397a4f7c8a340e03770d5d5bc8408c4a3042eb772ed8488f3aa007281feab/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ba397a4f7c8a340e03770d5d5bc8408c4a3042eb772ed8488f3aa007281feab/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-767646",
	                "Source": "/var/lib/docker/volumes/ha-767646/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-767646",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-767646",
	                "name.minikube.sigs.k8s.io": "ha-767646",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22b269fab503e8e74edffb86e3dff1aea4d5eb5850dbb62af2926d4ba14bf336",
	            "SandboxKey": "/var/run/docker/netns/22b269fab503",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33960"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33961"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33964"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33962"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33963"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-767646": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3cb95f1f57f24c54441fb29cd27552abb46a3f8b0e40112988100843b817b70a",
	                    "EndpointID": "dc553caf48952fd6961be67acebdaa1b417d92fdd7ae1f3ecd83dabf57a90cbe",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-767646",
	                        "b07ec69b038f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
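
The inspect output confirms the machine container itself came back cleanly: it stopped at 14:38:31, started again at 14:38:32 (RestartCount tracks only restart-policy restarts, hence 0), and is publishing the apiserver's 8443 on 127.0.0.1:33963. That points at something inside the guest (kubelet or the runtime) for the NotReady node rather than at the container lifecycle. A format-string sketch to pull just those fields instead of the full JSON, using standard docker inspect Go templating:

	docker inspect -f '{{.State.Status}} since {{.State.StartedAt}}, apiserver at 127.0.0.1:{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-767646
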
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-767646 -n ha-767646
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-767646 logs -n 25: (2.061448892s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-767646 cp ha-767646-m03:/home/docker/cp-test.txt                              | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646-m04:/home/docker/cp-test_ha-767646-m03_ha-767646-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-767646 ssh -n                                                                 | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767646 ssh -n ha-767646-m04 sudo cat                                          | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | /home/docker/cp-test_ha-767646-m03_ha-767646-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-767646 cp testdata/cp-test.txt                                                | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-767646 ssh -n                                                                 | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767646 cp ha-767646-m04:/home/docker/cp-test.txt                              | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1428980662/001/cp-test_ha-767646-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-767646 ssh -n                                                                 | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-767646 cp ha-767646-m04:/home/docker/cp-test.txt                              | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646:/home/docker/cp-test_ha-767646-m04_ha-767646.txt                       |           |         |         |                     |                     |
	| ssh     | ha-767646 ssh -n                                                                 | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767646 ssh -n ha-767646 sudo cat                                              | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | /home/docker/cp-test_ha-767646-m04_ha-767646.txt                                 |           |         |         |                     |                     |
	| cp      | ha-767646 cp ha-767646-m04:/home/docker/cp-test.txt                              | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646-m02:/home/docker/cp-test_ha-767646-m04_ha-767646-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-767646 ssh -n                                                                 | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767646 ssh -n ha-767646-m02 sudo cat                                          | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | /home/docker/cp-test_ha-767646-m04_ha-767646-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-767646 cp ha-767646-m04:/home/docker/cp-test.txt                              | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646-m03:/home/docker/cp-test_ha-767646-m04_ha-767646-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-767646 ssh -n                                                                 | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | ha-767646-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-767646 ssh -n ha-767646-m03 sudo cat                                          | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | /home/docker/cp-test_ha-767646-m04_ha-767646-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-767646 node stop m02 -v=7                                                     | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:33 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-767646 node start m02 -v=7                                                    | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:33 UTC | 01 Jul 24 14:34 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767646 -v=7                                                           | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-767646 -v=7                                                                | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:34 UTC | 01 Jul 24 14:34 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767646 --wait=true -v=7                                                    | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:34 UTC | 01 Jul 24 14:37 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-767646                                                                | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:37 UTC |                     |
	| node    | ha-767646 node delete m03 -v=7                                                   | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:37 UTC | 01 Jul 24 14:37 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-767646 stop -v=7                                                              | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:37 UTC | 01 Jul 24 14:38 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-767646 --wait=true                                                         | ha-767646 | jenkins | v1.33.1 | 01 Jul 24 14:38 UTC | 01 Jul 24 14:40 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 14:38:31
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 14:38:31.694052 3774537 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:38:31.694239 3774537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:38:31.694251 3774537 out.go:304] Setting ErrFile to fd 2...
	I0701 14:38:31.694257 3774537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:38:31.694533 3774537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 14:38:31.694947 3774537 out.go:298] Setting JSON to false
	I0701 14:38:31.695903 3774537 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":166863,"bootTime":1719677849,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0701 14:38:31.695979 3774537 start.go:139] virtualization:  
	I0701 14:38:31.699105 3774537 out.go:177] * [ha-767646] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0701 14:38:31.702290 3774537 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 14:38:31.702387 3774537 notify.go:220] Checking for updates...
	I0701 14:38:31.706902 3774537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 14:38:31.709186 3774537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:38:31.711598 3774537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	I0701 14:38:31.714173 3774537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0701 14:38:31.716510 3774537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 14:38:31.719394 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:38:31.719935 3774537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 14:38:31.744534 3774537 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0701 14:38:31.744628 3774537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:38:31.812699 3774537 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-07-01 14:38:31.802932655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:38:31.812808 3774537 docker.go:295] overlay module found
	I0701 14:38:31.815540 3774537 out.go:177] * Using the docker driver based on existing profile
	I0701 14:38:31.817957 3774537 start.go:297] selected driver: docker
	I0701 14:38:31.817975 3774537 start.go:901] validating driver "docker" against &{Name:ha-767646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-767646 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 14:38:31.818124 3774537 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 14:38:31.818227 3774537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:38:31.881727 3774537 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-07-01 14:38:31.872000386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:38:31.882139 3774537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 14:38:31.882167 3774537 cni.go:84] Creating CNI manager for ""
	I0701 14:38:31.882178 3774537 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0701 14:38:31.882224 3774537 start.go:340] cluster config:
	{Name:ha-767646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-767646 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device
-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval
:1m0s}
	I0701 14:38:31.886540 3774537 out.go:177] * Starting "ha-767646" primary control-plane node in "ha-767646" cluster
	I0701 14:38:31.889129 3774537 cache.go:121] Beginning downloading kic base image for docker with crio
	I0701 14:38:31.891719 3774537 out.go:177] * Pulling base image v0.0.44-1719413016-19142 ...
	I0701 14:38:31.894209 3774537 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:38:31.894240 3774537 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon
	I0701 14:38:31.894257 3774537 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0701 14:38:31.894265 3774537 cache.go:56] Caching tarball of preloaded images
	I0701 14:38:31.894338 3774537 preload.go:173] Found /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0701 14:38:31.894347 3774537 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0701 14:38:31.894490 3774537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/config.json ...
	I0701 14:38:31.919007 3774537 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon, skipping pull
	I0701 14:38:31.919034 3774537 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d exists in daemon, skipping load
	I0701 14:38:31.919057 3774537 cache.go:194] Successfully downloaded all kic artifacts
	I0701 14:38:31.919086 3774537 start.go:360] acquireMachinesLock for ha-767646: {Name:mk38461ba0297add04a7aaf6cb2e7496f402f5b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 14:38:31.919180 3774537 start.go:364] duration metric: took 57.396µs to acquireMachinesLock for "ha-767646"
	I0701 14:38:31.919206 3774537 start.go:96] Skipping create...Using existing machine configuration
	I0701 14:38:31.919214 3774537 fix.go:54] fixHost starting: 
	I0701 14:38:31.919497 3774537 cli_runner.go:164] Run: docker container inspect ha-767646 --format={{.State.Status}}
	I0701 14:38:31.935642 3774537 fix.go:112] recreateIfNeeded on ha-767646: state=Stopped err=<nil>
	W0701 14:38:31.935674 3774537 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 14:38:31.940284 3774537 out.go:177] * Restarting existing docker container for "ha-767646" ...
	I0701 14:38:31.942825 3774537 cli_runner.go:164] Run: docker start ha-767646
	I0701 14:38:32.228302 3774537 cli_runner.go:164] Run: docker container inspect ha-767646 --format={{.State.Status}}
	I0701 14:38:32.249802 3774537 kic.go:430] container "ha-767646" state is running.
	I0701 14:38:32.250196 3774537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646
	I0701 14:38:32.271578 3774537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/config.json ...
	I0701 14:38:32.271824 3774537 machine.go:94] provisionDockerMachine start ...
	I0701 14:38:32.271882 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:38:32.299100 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:38:32.299408 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33960 <nil> <nil>}
	I0701 14:38:32.299417 3774537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 14:38:32.300095 3774537 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0701 14:38:35.440637 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767646
	
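Note: the libmachine client above dials the container's published SSH port (127.0.0.1:33960) and retries until sshd inside the restarted container answers; the initial "handshake failed: EOF" is expected while the container is still booting. A minimal sketch of the same retry pattern, using the key path, port, and username shown in the log:

    # retry until sshd in the restarted node container is reachable
    until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
          -i /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646/id_rsa \
          -p 33960 docker@127.0.0.1 hostname; do
      sleep 1   # sshd is usually up within a few seconds of "docker start"
    done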
	I0701 14:38:35.440664 3774537 ubuntu.go:169] provisioning hostname "ha-767646"
	I0701 14:38:35.440726 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:38:35.456932 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:38:35.457216 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33960 <nil> <nil>}
	I0701 14:38:35.457230 3774537 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767646 && echo "ha-767646" | sudo tee /etc/hostname
	I0701 14:38:35.609468 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767646
	
	I0701 14:38:35.609576 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:38:35.627056 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:38:35.627323 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33960 <nil> <nil>}
	I0701 14:38:35.627347 3774537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767646' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767646/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767646' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 14:38:35.773142 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
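The hosts script above follows the Debian/Ubuntu convention of mapping the machine's own hostname to 127.0.1.1 (not 127.0.0.1), replacing an existing 127.0.1.1 entry in place or appending one if none exists, so the edit stays idempotent across restarts. A quick spot-check inside the node:

    grep '^127.0.1.1' /etc/hosts   # expected: "127.0.1.1 ha-767646"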
	I0701 14:38:35.773166 3774537 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19166-3708336/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-3708336/.minikube}
	I0701 14:38:35.773204 3774537 ubuntu.go:177] setting up certificates
	I0701 14:38:35.773219 3774537 provision.go:84] configureAuth start
	I0701 14:38:35.773283 3774537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646
	I0701 14:38:35.790470 3774537 provision.go:143] copyHostCerts
	I0701 14:38:35.790514 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem
	I0701 14:38:35.790548 3774537 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem, removing ...
	I0701 14:38:35.790557 3774537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem
	I0701 14:38:35.790634 3774537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem (1675 bytes)
	I0701 14:38:35.790733 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem
	I0701 14:38:35.790755 3774537 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem, removing ...
	I0701 14:38:35.790760 3774537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem
	I0701 14:38:35.790795 3774537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem (1082 bytes)
	I0701 14:38:35.790850 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem
	I0701 14:38:35.790870 3774537 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem, removing ...
	I0701 14:38:35.790877 3774537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem
	I0701 14:38:35.790906 3774537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem (1123 bytes)
	I0701 14:38:35.790964 3774537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem org=jenkins.ha-767646 san=[127.0.0.1 192.168.49.2 ha-767646 localhost minikube]
	I0701 14:38:36.036127 3774537 provision.go:177] copyRemoteCerts
	I0701 14:38:36.036202 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 14:38:36.036250 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:38:36.055461 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33960 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646/id_rsa Username:docker}
	I0701 14:38:36.153425 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 14:38:36.153501 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 14:38:36.177197 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 14:38:36.177299 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0701 14:38:36.200446 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 14:38:36.200525 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0701 14:38:36.223307 3774537 provision.go:87] duration metric: took 450.0733ms to configureAuth
	I0701 14:38:36.223335 3774537 ubuntu.go:193] setting minikube options for container-runtime
	I0701 14:38:36.223602 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:38:36.223706 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:38:36.240263 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:38:36.240510 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33960 <nil> <nil>}
	I0701 14:38:36.240529 3774537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0701 14:38:36.678268 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0701 14:38:36.678291 3774537 machine.go:97] duration metric: took 4.406456366s to provisionDockerMachine
	I0701 14:38:36.678303 3774537 start.go:293] postStartSetup for "ha-767646" (driver="docker")
	I0701 14:38:36.678314 3774537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 14:38:36.678396 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 14:38:36.678440 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:38:36.700426 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33960 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646/id_rsa Username:docker}
	I0701 14:38:36.798319 3774537 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 14:38:36.801375 3774537 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 14:38:36.801408 3774537 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 14:38:36.801435 3774537 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 14:38:36.801447 3774537 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0701 14:38:36.801457 3774537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/addons for local assets ...
	I0701 14:38:36.801523 3774537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/files for local assets ...
	I0701 14:38:36.801603 3774537 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem -> 37137252.pem in /etc/ssl/certs
	I0701 14:38:36.801612 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem -> /etc/ssl/certs/37137252.pem
	I0701 14:38:36.801722 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 14:38:36.810046 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem --> /etc/ssl/certs/37137252.pem (1708 bytes)
	I0701 14:38:36.833681 3774537 start.go:296] duration metric: took 155.363349ms for postStartSetup
	I0701 14:38:36.833762 3774537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 14:38:36.833836 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:38:36.850474 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33960 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646/id_rsa Username:docker}
	I0701 14:38:36.945915 3774537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 14:38:36.950206 3774537 fix.go:56] duration metric: took 5.030986232s for fixHost
	I0701 14:38:36.950234 3774537 start.go:83] releasing machines lock for "ha-767646", held for 5.031040083s
	I0701 14:38:36.950306 3774537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646
	I0701 14:38:36.966732 3774537 ssh_runner.go:195] Run: cat /version.json
	I0701 14:38:36.966790 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:38:36.966803 3774537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 14:38:36.966855 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:38:36.984463 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33960 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646/id_rsa Username:docker}
	I0701 14:38:36.985322 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33960 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646/id_rsa Username:docker}
	I0701 14:38:37.205884 3774537 ssh_runner.go:195] Run: systemctl --version
	I0701 14:38:37.210216 3774537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0701 14:38:37.350426 3774537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 14:38:37.354837 3774537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 14:38:37.364243 3774537 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0701 14:38:37.364383 3774537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 14:38:37.377310 3774537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
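Because kindnet will be the only CNI, any pre-existing loopback, bridge, or podman configs under /etc/cni/net.d are parked by renaming them with a .mk_disabled suffix rather than deleting them. The two find/mv invocations above amount to roughly:

    # disable conflicting CNI configs so only kindnet's config stays active
    for f in /etc/cni/net.d/*loopback.conf* /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      case "$f" in *.mk_disabled) continue ;; esac
      [ -e "$f" ] && sudo mv "$f" "$f.mk_disabled"
    done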
	I0701 14:38:37.377389 3774537 start.go:494] detecting cgroup driver to use...
	I0701 14:38:37.377434 3774537 detect.go:196] detected "cgroupfs" cgroup driver on host os
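"detected cgroupfs" here means the host is not exposing the cgroup v2 unified hierarchy with systemd as the cgroup manager. One common way to make the same distinction by hand (an assumption about the check, not necessarily what detect.go does):

    # the unified (v2) hierarchy exposes cgroup.controllers at the cgroup root
    [ -f /sys/fs/cgroup/cgroup.controllers ] && echo "cgroup v2" || echo "cgroup v1 (cgroupfs)"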
	I0701 14:38:37.377502 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 14:38:37.389622 3774537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 14:38:37.401813 3774537 docker.go:217] disabling cri-docker service (if available) ...
	I0701 14:38:37.401911 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0701 14:38:37.415051 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0701 14:38:37.427252 3774537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0701 14:38:37.521583 3774537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0701 14:38:37.612779 3774537 docker.go:233] disabling docker service ...
	I0701 14:38:37.612898 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 14:38:37.625962 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 14:38:37.637325 3774537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 14:38:37.726205 3774537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 14:38:37.820790 3774537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 14:38:37.832275 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 14:38:37.848326 3774537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0701 14:38:37.848399 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:37.858314 3774537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0701 14:38:37.858385 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:37.867737 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:37.878296 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:37.888053 3774537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 14:38:37.897080 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:37.906907 3774537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:37.916327 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
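After this series of sed edits, the cri-o drop-in should read approximately as follows (reconstructed from the commands above, not captured from the node):

    # /etc/crio/crio.conf.d/02-crio.conf (expected shape)
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]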
	I0701 14:38:37.925895 3774537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 14:38:37.934186 3774537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 14:38:37.942406 3774537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:38:38.030932 3774537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0701 14:38:38.146749 3774537 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0701 14:38:38.146819 3774537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0701 14:38:38.150427 3774537 start.go:562] Will wait 60s for crictl version
	I0701 14:38:38.150490 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:38:38.153795 3774537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 14:38:38.190424 3774537 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0701 14:38:38.190626 3774537 ssh_runner.go:195] Run: crio --version
	I0701 14:38:38.230997 3774537 ssh_runner.go:195] Run: crio --version
	I0701 14:38:38.275186 3774537 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0701 14:38:38.277425 3774537 cli_runner.go:164] Run: docker network inspect ha-767646 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 14:38:38.293230 3774537 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0701 14:38:38.296980 3774537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 14:38:38.308522 3774537 kubeadm.go:877] updating cluster {Name:ha-767646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-767646 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false l
ogviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 14:38:38.308691 3774537 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:38:38.308761 3774537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 14:38:38.358794 3774537 crio.go:514] all images are preloaded for cri-o runtime.
	I0701 14:38:38.358816 3774537 crio.go:433] Images already preloaded, skipping extraction
	I0701 14:38:38.358872 3774537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 14:38:38.395326 3774537 crio.go:514] all images are preloaded for cri-o runtime.
	I0701 14:38:38.395348 3774537 cache_images.go:84] Images are preloaded, skipping loading
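The preload check compares the image list cri-o reports against the expected set for v1.30.2; since the preloaded tarball was already extracted, nothing is pulled. To inspect the same list by hand (jq assumed available on the host):

    sudo crictl images --output json | jq -r '.images[].repoTags[]?'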
	I0701 14:38:38.395360 3774537 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.2 crio true true} ...
	I0701 14:38:38.395468 3774537 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-767646 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-767646 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
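The kubelet unit is overridden via a systemd drop-in rather than by editing the packaged unit: the empty ExecStart= line clears the stock command before the minikube-specific one is set. Once the drop-in lands (it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below), the merged unit can be reviewed with:

    systemctl cat kubelet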
	I0701 14:38:38.395574 3774537 ssh_runner.go:195] Run: crio config
	I0701 14:38:38.445369 3774537 cni.go:84] Creating CNI manager for ""
	I0701 14:38:38.445394 3774537 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0701 14:38:38.445404 3774537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 14:38:38.445426 3774537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-767646 NodeName:ha-767646 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0701 14:38:38.445570 3774537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-767646"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
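The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new below. minikube drives the kubeadm phases itself, but such a file can be sanity-checked by hand without touching the node, for example:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run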
	I0701 14:38:38.445592 3774537 kube-vip.go:115] generating kube-vip config ...
	I0701 14:38:38.445645 3774537 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0701 14:38:38.458526 3774537 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 14:38:38.458651 3774537 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
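This static pod runs kube-vip with leader election (the plndr-cp-lock lease), so exactly one control-plane node holds the VIP 192.168.49.254 and load-balances port 8443 across the API servers. After the restart, whichever node currently holds the lease should show the VIP on eth0, e.g.:

    docker exec ha-767646 ip addr show eth0 | grep 192.168.49.254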
	I0701 14:38:38.458743 3774537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 14:38:38.467584 3774537 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 14:38:38.467653 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0701 14:38:38.477614 3774537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0701 14:38:38.496014 3774537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 14:38:38.515128 3774537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0701 14:38:38.533660 3774537 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 14:38:38.551573 3774537 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0701 14:38:38.555250 3774537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 14:38:38.566078 3774537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:38:38.656661 3774537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 14:38:38.670734 3774537 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646 for IP: 192.168.49.2
	I0701 14:38:38.670753 3774537 certs.go:194] generating shared ca certs ...
	I0701 14:38:38.670769 3774537 certs.go:226] acquiring lock for ca certs: {Name:mkef61a10d340f62d4856e4c226678a7bd970ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:38:38.670903 3774537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key
	I0701 14:38:38.670949 3774537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key
	I0701 14:38:38.670960 3774537 certs.go:256] generating profile certs ...
	I0701 14:38:38.671051 3774537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/client.key
	I0701 14:38:38.671081 3774537 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.key.a57cedbb
	I0701 14:38:38.671102 3774537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.crt.a57cedbb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0701 14:38:38.943614 3774537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.crt.a57cedbb ...
	I0701 14:38:38.943694 3774537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.crt.a57cedbb: {Name:mkb47aa0f2ce3f09b66c9d83e3c74c5831ab6b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:38:38.943927 3774537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.key.a57cedbb ...
	I0701 14:38:38.943966 3774537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.key.a57cedbb: {Name:mk0234651bc4400527305a5a5510ee6ca9906c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:38:38.944113 3774537 certs.go:381] copying /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.crt.a57cedbb -> /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.crt
	I0701 14:38:38.944313 3774537 certs.go:385] copying /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.key.a57cedbb -> /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.key
	I0701 14:38:38.944501 3774537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/proxy-client.key
	I0701 14:38:38.944537 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 14:38:38.944572 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 14:38:38.944616 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 14:38:38.944653 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 14:38:38.944684 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 14:38:38.944733 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 14:38:38.944769 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 14:38:38.944798 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 14:38:38.944910 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem (1338 bytes)
	W0701 14:38:38.944981 3774537 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725_empty.pem, impossibly tiny 0 bytes
	I0701 14:38:38.945048 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 14:38:38.945101 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem (1082 bytes)
	I0701 14:38:38.945158 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem (1123 bytes)
	I0701 14:38:38.945205 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem (1675 bytes)
	I0701 14:38:38.945290 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem (1708 bytes)
	I0701 14:38:38.945348 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:38:38.945396 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem -> /usr/share/ca-certificates/3713725.pem
	I0701 14:38:38.945429 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem -> /usr/share/ca-certificates/37137252.pem
	I0701 14:38:38.946023 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 14:38:38.971652 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 14:38:38.998102 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 14:38:39.025802 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 14:38:39.051436 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0701 14:38:39.076372 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 14:38:39.101949 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 14:38:39.126995 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 14:38:39.152525 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 14:38:39.176936 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem --> /usr/share/ca-certificates/3713725.pem (1338 bytes)
	I0701 14:38:39.201822 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem --> /usr/share/ca-certificates/37137252.pem (1708 bytes)
	I0701 14:38:39.226726 3774537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 14:38:39.244788 3774537 ssh_runner.go:195] Run: openssl version
	I0701 14:38:39.250130 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 14:38:39.259988 3774537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:38:39.263652 3774537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:38:39.263767 3774537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:38:39.270763 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 14:38:39.279688 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3713725.pem && ln -fs /usr/share/ca-certificates/3713725.pem /etc/ssl/certs/3713725.pem"
	I0701 14:38:39.289215 3774537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3713725.pem
	I0701 14:38:39.292676 3774537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 14:25 /usr/share/ca-certificates/3713725.pem
	I0701 14:38:39.292739 3774537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3713725.pem
	I0701 14:38:39.299584 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3713725.pem /etc/ssl/certs/51391683.0"
	I0701 14:38:39.308564 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/37137252.pem && ln -fs /usr/share/ca-certificates/37137252.pem /etc/ssl/certs/37137252.pem"
	I0701 14:38:39.318132 3774537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/37137252.pem
	I0701 14:38:39.321684 3774537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 14:25 /usr/share/ca-certificates/37137252.pem
	I0701 14:38:39.321751 3774537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/37137252.pem
	I0701 14:38:39.328684 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/37137252.pem /etc/ssl/certs/3ec20f2e.0"
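The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: each is the output of the `openssl x509 -hash` run just before it, plus a .0 suffix, which is how OpenSSL's CA-path lookup locates certificates in /etc/ssl/certs. For example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941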
	I0701 14:38:39.337448 3774537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 14:38:39.340916 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 14:38:39.347665 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 14:38:39.354941 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 14:38:39.361890 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 14:38:39.368482 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 14:38:39.375166 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
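Each -checkend 86400 run exits 0 only if the certificate will still be valid 24 hours from now, which is how the restart path decides whether any control-plane certificates need regeneration. By hand:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"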
	I0701 14:38:39.382114 3774537 kubeadm.go:391] StartCluster: {Name:ha-767646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-767646 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logv
iewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 14:38:39.382243 3774537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0701 14:38:39.382343 3774537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 14:38:39.419199 3774537 cri.go:89] found id: ""
	I0701 14:38:39.419265 3774537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 14:38:39.427930 3774537 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 14:38:39.427991 3774537 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 14:38:39.428002 3774537 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 14:38:39.428056 3774537 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 14:38:39.436552 3774537 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 14:38:39.437091 3774537 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-767646" does not appear in /home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:38:39.437252 3774537 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-3708336/kubeconfig needs updating (will repair): [kubeconfig missing "ha-767646" cluster setting kubeconfig missing "ha-767646" context setting]
	I0701 14:38:39.437540 3774537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/kubeconfig: {Name:mk4d5838a81c57a1d9ec9a509328664588dd34aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:38:39.437935 3774537 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:38:39.438193 3774537 kapi.go:59] client config for ha-767646: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/client.key", CAFile:"/home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x179ece0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0701 14:38:39.438980 3774537 cert_rotation.go:137] Starting client certificate rotation controller
	I0701 14:38:39.439684 3774537 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 14:38:39.448421 3774537 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0701 14:38:39.448482 3774537 kubeadm.go:591] duration metric: took 20.472604ms to restartPrimaryControlPlane
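
The kubeconfig repair above (a missing "ha-767646" cluster and context entry) maps onto client-go's clientcmd package. A minimal sketch, using the server URL from the log but an illustrative file path and helper name; minikube's actual implementation lives in its kubeconfig package and additionally takes the file lock shown above:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds cluster/context entries for a profile if they are
// missing, mirroring the "needs updating (will repair)" step in the log.
func repairKubeconfig(path, profile, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[profile]; !ok {
		cfg.Clusters[profile] = &clientcmdapi.Cluster{Server: server}
	}
	if _, ok := cfg.Contexts[profile]; !ok {
		cfg.Contexts[profile] = &clientcmdapi.Context{Cluster: profile, AuthInfo: profile}
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	// Path is illustrative, not the jenkins path from the log.
	if err := repairKubeconfig("/tmp/kubeconfig", "ha-767646", "https://192.168.49.2:8443"); err != nil {
		fmt.Println(err)
	}
}
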
	I0701 14:38:39.448497 3774537 kubeadm.go:393] duration metric: took 66.39175ms to StartCluster
	I0701 14:38:39.448513 3774537 settings.go:142] acquiring lock: {Name:mke9008d6920f4be65eddeda5d60c738ed3823ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:38:39.448571 3774537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:38:39.449215 3774537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/kubeconfig: {Name:mk4d5838a81c57a1d9ec9a509328664588dd34aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:38:39.449428 3774537 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0701 14:38:39.449451 3774537 start.go:240] waiting for startup goroutines ...
	I0701 14:38:39.449458 3774537 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 14:38:39.449876 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:38:39.452745 3774537 out.go:177] * Enabled addons: 
	I0701 14:38:39.454963 3774537 addons.go:510] duration metric: took 5.496325ms for enable addons: enabled=[]
	I0701 14:38:39.455026 3774537 start.go:245] waiting for cluster config update ...
	I0701 14:38:39.455045 3774537 start.go:254] writing updated cluster config ...
	I0701 14:38:39.457597 3774537 out.go:177] 
	I0701 14:38:39.459918 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:38:39.460110 3774537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/config.json ...
	I0701 14:38:39.462333 3774537 out.go:177] * Starting "ha-767646-m02" control-plane node in "ha-767646" cluster
	I0701 14:38:39.464122 3774537 cache.go:121] Beginning downloading kic base image for docker with crio
	I0701 14:38:39.465862 3774537 out.go:177] * Pulling base image v0.0.44-1719413016-19142 ...
	I0701 14:38:39.467647 3774537 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon
	I0701 14:38:39.467648 3774537 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:38:39.467714 3774537 cache.go:56] Caching tarball of preloaded images
	I0701 14:38:39.467797 3774537 preload.go:173] Found /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0701 14:38:39.467807 3774537 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0701 14:38:39.467971 3774537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/config.json ...
	I0701 14:38:39.481544 3774537 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon, skipping pull
	I0701 14:38:39.481568 3774537 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d exists in daemon, skipping load
	I0701 14:38:39.481587 3774537 cache.go:194] Successfully downloaded all kic artifacts
	I0701 14:38:39.481616 3774537 start.go:360] acquireMachinesLock for ha-767646-m02: {Name:mkc509acd41b47a7511e2e75e6c4a05e65937914 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 14:38:39.481702 3774537 start.go:364] duration metric: took 50.347µs to acquireMachinesLock for "ha-767646-m02"
	I0701 14:38:39.481727 3774537 start.go:96] Skipping create...Using existing machine configuration
	I0701 14:38:39.481732 3774537 fix.go:54] fixHost starting: m02
	I0701 14:38:39.482030 3774537 cli_runner.go:164] Run: docker container inspect ha-767646-m02 --format={{.State.Status}}
	I0701 14:38:39.498597 3774537 fix.go:112] recreateIfNeeded on ha-767646-m02: state=Stopped err=<nil>
	W0701 14:38:39.498626 3774537 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 14:38:39.501300 3774537 out.go:177] * Restarting existing docker container for "ha-767646-m02" ...
	I0701 14:38:39.503540 3774537 cli_runner.go:164] Run: docker start ha-767646-m02
	I0701 14:38:39.778659 3774537 cli_runner.go:164] Run: docker container inspect ha-767646-m02 --format={{.State.Status}}
	I0701 14:38:39.801183 3774537 kic.go:430] container "ha-767646-m02" state is running.
	I0701 14:38:39.801581 3774537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646-m02
	I0701 14:38:39.823423 3774537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/config.json ...
	I0701 14:38:39.823677 3774537 machine.go:94] provisionDockerMachine start ...
	I0701 14:38:39.823737 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m02
	I0701 14:38:39.845377 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:38:39.845615 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33965 <nil> <nil>}
	I0701 14:38:39.845624 3774537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 14:38:39.846672 3774537 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
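
The handshake EOF here is expected immediately after `docker start`: sshd inside the container is still coming up, so libmachine just keeps retrying until the dial succeeds (as it does about three seconds later below). A rough sketch of such a retry loop with golang.org/x/crypto/ssh, assuming a one-second backoff and an illustrative key path; libmachine's own backoff policy is internal:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until sshd in the freshly started
// container accepts the handshake.
func dialWithRetry(addr, user, keyPath string, timeout time.Duration) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM
	}
	deadline := time.Now().Add(timeout)
	for {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh not ready: %w", err)
		}
		time.Sleep(time.Second) // assumed backoff
	}
}

func main() {
	client, err := dialWithRetry("127.0.0.1:33965", "docker", "id_rsa", 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	client.Close()
}
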
	I0701 14:38:43.034033 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767646-m02
	
	I0701 14:38:43.034102 3774537 ubuntu.go:169] provisioning hostname "ha-767646-m02"
	I0701 14:38:43.034253 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m02
	I0701 14:38:43.062644 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:38:43.062889 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33965 <nil> <nil>}
	I0701 14:38:43.062905 3774537 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767646-m02 && echo "ha-767646-m02" | sudo tee /etc/hostname
	I0701 14:38:43.293670 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767646-m02
	
	I0701 14:38:43.293830 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m02
	I0701 14:38:43.325109 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:38:43.325360 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33965 <nil> <nil>}
	I0701 14:38:43.325376 3774537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767646-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767646-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767646-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 14:38:43.521905 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 14:38:43.521989 3774537 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19166-3708336/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-3708336/.minikube}
	I0701 14:38:43.522022 3774537 ubuntu.go:177] setting up certificates
	I0701 14:38:43.522062 3774537 provision.go:84] configureAuth start
	I0701 14:38:43.522186 3774537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646-m02
	I0701 14:38:43.543342 3774537 provision.go:143] copyHostCerts
	I0701 14:38:43.543381 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem
	I0701 14:38:43.543411 3774537 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem, removing ...
	I0701 14:38:43.543417 3774537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem
	I0701 14:38:43.543497 3774537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem (1082 bytes)
	I0701 14:38:43.543583 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem
	I0701 14:38:43.543602 3774537 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem, removing ...
	I0701 14:38:43.543607 3774537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem
	I0701 14:38:43.543634 3774537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem (1123 bytes)
	I0701 14:38:43.543675 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem
	I0701 14:38:43.543690 3774537 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem, removing ...
	I0701 14:38:43.543694 3774537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem
	I0701 14:38:43.543720 3774537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem (1675 bytes)
	I0701 14:38:43.543765 3774537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem org=jenkins.ha-767646-m02 san=[127.0.0.1 192.168.49.3 ha-767646-m02 localhost minikube]
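
The "generating server cert" step signs a machine certificate against the shared CA with the SAN list shown (127.0.0.1, 192.168.49.3, the hostname, localhost, minikube). A condensed sketch of that signing with crypto/x509; the CA file paths and the one-year validity are assumptions, not minikube's exact values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the shared CA pair; paths are illustrative stand-ins for
	// the .minikube/certs/ca.pem and ca-key.pem files in the log.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		panic(err)
	}
	cb, _ := pem.Decode(caPEM)
	kb, _ := pem.Decode(caKeyPEM)
	if cb == nil || kb == nil {
		panic("bad PEM input")
	}
	caCert, err := x509.ParseCertificate(cb.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
	if err != nil {
		panic(err)
	}

	// Fresh server key plus a template carrying the SANs from the log line.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-767646-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-767646-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem body
}
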
	I0701 14:38:44.591619 3774537 provision.go:177] copyRemoteCerts
	I0701 14:38:44.591691 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 14:38:44.591733 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m02
	I0701 14:38:44.614994 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33965 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m02/id_rsa Username:docker}
	I0701 14:38:44.726423 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 14:38:44.726485 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0701 14:38:44.757878 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 14:38:44.757938 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 14:38:44.784814 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 14:38:44.784872 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 14:38:44.811338 3774537 provision.go:87] duration metric: took 1.289246654s to configureAuth
	I0701 14:38:44.811415 3774537 ubuntu.go:193] setting minikube options for container-runtime
	I0701 14:38:44.811708 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:38:44.811857 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m02
	I0701 14:38:44.828932 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:38:44.829209 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33965 <nil> <nil>}
	I0701 14:38:44.829226 3774537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0701 14:38:45.233792 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0701 14:38:45.233923 3774537 machine.go:97] duration metric: took 5.410236831s to provisionDockerMachine
	I0701 14:38:45.233953 3774537 start.go:293] postStartSetup for "ha-767646-m02" (driver="docker")
	I0701 14:38:45.233994 3774537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 14:38:45.234157 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 14:38:45.234236 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m02
	I0701 14:38:45.254682 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33965 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m02/id_rsa Username:docker}
	I0701 14:38:45.363009 3774537 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 14:38:45.370239 3774537 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 14:38:45.370315 3774537 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 14:38:45.370334 3774537 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 14:38:45.370347 3774537 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0701 14:38:45.370358 3774537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/addons for local assets ...
	I0701 14:38:45.370430 3774537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/files for local assets ...
	I0701 14:38:45.370529 3774537 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem -> 37137252.pem in /etc/ssl/certs
	I0701 14:38:45.370541 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem -> /etc/ssl/certs/37137252.pem
	I0701 14:38:45.370661 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 14:38:45.380823 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem --> /etc/ssl/certs/37137252.pem (1708 bytes)
	I0701 14:38:45.417452 3774537 start.go:296] duration metric: took 183.453874ms for postStartSetup
	I0701 14:38:45.417536 3774537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 14:38:45.417581 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m02
	I0701 14:38:45.443789 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33965 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m02/id_rsa Username:docker}
	I0701 14:38:45.538714 3774537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 14:38:45.547919 3774537 fix.go:56] duration metric: took 6.066178028s for fixHost
	I0701 14:38:45.547953 3774537 start.go:83] releasing machines lock for "ha-767646-m02", held for 6.066238132s
	I0701 14:38:45.548025 3774537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646-m02
	I0701 14:38:45.575789 3774537 out.go:177] * Found network options:
	I0701 14:38:45.578466 3774537 out.go:177]   - NO_PROXY=192.168.49.2
	W0701 14:38:45.581303 3774537 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 14:38:45.581350 3774537 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 14:38:45.581418 3774537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0701 14:38:45.581465 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m02
	I0701 14:38:45.581697 3774537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 14:38:45.581747 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m02
	I0701 14:38:45.622236 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33965 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m02/id_rsa Username:docker}
	I0701 14:38:45.624972 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33965 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m02/id_rsa Username:docker}
	I0701 14:38:46.248097 3774537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 14:38:46.263155 3774537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 14:38:46.287446 3774537 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0701 14:38:46.287526 3774537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 14:38:46.317621 3774537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0701 14:38:46.317645 3774537 start.go:494] detecting cgroup driver to use...
	I0701 14:38:46.317677 3774537 detect.go:196] detected "cgroupfs" cgroup driver on host os
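
The `detected "cgroupfs" cgroup driver` decision essentially asks whether the host should drive cgroups through systemd. A simplified heuristic in Go (minikube's real detection is more involved; the rule below, unified hierarchy plus a running systemd implies "systemd", is an assumption, though the probed paths are standard Linux):

package main

import (
	"fmt"
	"os"
)

// cgroupDriver guesses the driver the container runtime should use:
// /sys/fs/cgroup/cgroup.controllers only exists on a unified (v2)
// hierarchy, and /run/systemd/system exists when systemd is PID 1.
func cgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		if _, err := os.Stat("/run/systemd/system"); err == nil {
			return "systemd"
		}
	}
	return "cgroupfs"
}

func main() { fmt.Println(cgroupDriver()) }
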
	I0701 14:38:46.317730 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 14:38:46.381717 3774537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 14:38:46.461863 3774537 docker.go:217] disabling cri-docker service (if available) ...
	I0701 14:38:46.461972 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0701 14:38:46.518062 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0701 14:38:46.557365 3774537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0701 14:38:46.930717 3774537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0701 14:38:47.222337 3774537 docker.go:233] disabling docker service ...
	I0701 14:38:47.222454 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 14:38:47.275942 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 14:38:47.329688 3774537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 14:38:47.593900 3774537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 14:38:47.877373 3774537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 14:38:47.894964 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 14:38:47.954459 3774537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0701 14:38:47.954545 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:47.986151 3774537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0701 14:38:47.986234 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:48.006456 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:48.033469 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:48.061679 3774537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 14:38:48.086423 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:48.129622 3774537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:48.164335 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:38:48.218543 3774537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 14:38:48.252436 3774537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 14:38:48.311024 3774537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:38:48.576413 3774537 ssh_runner.go:195] Run: sudo systemctl restart crio
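
The `sed` invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and rebuild the default_sysctls list so unprivileged ports start at 0. The two central edits expressed in Go for readers who prefer it over sed (a sketch; the 0644 write-back mode is an assumption):

package main

import (
	"os"
	"regexp"
)

// patchCrioConf applies the equivalents of the first two sed edits in
// the log: overwrite whatever pause_image and cgroup_manager were set to.
func patchCrioConf(path string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(b, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0o644) // assumed mode
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		panic(err)
	}
}
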
	I0701 14:38:49.281468 3774537 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0701 14:38:49.281554 3774537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
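
The two "Will wait 60s" steps here are plain polling loops over `stat` and `crictl version`. A minimal version of the socket wait (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket appears, as the
// "Will wait 60s for socket path" step does above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
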
	I0701 14:38:49.285591 3774537 start.go:562] Will wait 60s for crictl version
	I0701 14:38:49.285686 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:38:49.289473 3774537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 14:38:49.371924 3774537 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0701 14:38:49.372081 3774537 ssh_runner.go:195] Run: crio --version
	I0701 14:38:49.457189 3774537 ssh_runner.go:195] Run: crio --version
	I0701 14:38:49.527040 3774537 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0701 14:38:49.529150 3774537 out.go:177]   - env NO_PROXY=192.168.49.2
	I0701 14:38:49.531126 3774537 cli_runner.go:164] Run: docker network inspect ha-767646 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 14:38:49.556461 3774537 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0701 14:38:49.560441 3774537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 14:38:49.585330 3774537 mustload.go:65] Loading cluster: ha-767646
	I0701 14:38:49.585576 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:38:49.585841 3774537 cli_runner.go:164] Run: docker container inspect ha-767646 --format={{.State.Status}}
	I0701 14:38:49.611770 3774537 host.go:66] Checking if "ha-767646" exists ...
	I0701 14:38:49.612060 3774537 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646 for IP: 192.168.49.3
	I0701 14:38:49.612068 3774537 certs.go:194] generating shared ca certs ...
	I0701 14:38:49.612083 3774537 certs.go:226] acquiring lock for ca certs: {Name:mkef61a10d340f62d4856e4c226678a7bd970ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:38:49.612191 3774537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key
	I0701 14:38:49.612233 3774537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key
	I0701 14:38:49.612240 3774537 certs.go:256] generating profile certs ...
	I0701 14:38:49.612315 3774537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/client.key
	I0701 14:38:49.612378 3774537 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.key.bac6a5cf
	I0701 14:38:49.612416 3774537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/proxy-client.key
	I0701 14:38:49.612425 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 14:38:49.612437 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 14:38:49.612448 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 14:38:49.612459 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 14:38:49.612469 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0701 14:38:49.612481 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0701 14:38:49.612492 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0701 14:38:49.612502 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0701 14:38:49.612551 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem (1338 bytes)
	W0701 14:38:49.612580 3774537 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725_empty.pem, impossibly tiny 0 bytes
	I0701 14:38:49.612588 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 14:38:49.612613 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem (1082 bytes)
	I0701 14:38:49.612636 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem (1123 bytes)
	I0701 14:38:49.612658 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem (1675 bytes)
	I0701 14:38:49.612700 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem (1708 bytes)
	I0701 14:38:49.612727 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem -> /usr/share/ca-certificates/3713725.pem
	I0701 14:38:49.612740 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem -> /usr/share/ca-certificates/37137252.pem
	I0701 14:38:49.612754 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:38:49.612810 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:38:49.638526 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33960 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646/id_rsa Username:docker}
	I0701 14:38:49.757302 3774537 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0701 14:38:49.763819 3774537 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0701 14:38:49.785172 3774537 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0701 14:38:49.795947 3774537 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0701 14:38:49.843037 3774537 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0701 14:38:49.858499 3774537 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0701 14:38:49.888306 3774537 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0701 14:38:49.902555 3774537 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0701 14:38:49.928613 3774537 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0701 14:38:49.941484 3774537 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0701 14:38:49.970415 3774537 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0701 14:38:49.979990 3774537 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0701 14:38:50.006420 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 14:38:50.046779 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 14:38:50.084975 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 14:38:50.132603 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 14:38:50.173181 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0701 14:38:50.212549 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 14:38:50.253542 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 14:38:50.290310 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0701 14:38:50.330747 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem --> /usr/share/ca-certificates/3713725.pem (1338 bytes)
	I0701 14:38:50.360724 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem --> /usr/share/ca-certificates/37137252.pem (1708 bytes)
	I0701 14:38:50.399307 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 14:38:50.432496 3774537 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0701 14:38:50.465687 3774537 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0701 14:38:50.493386 3774537 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0701 14:38:50.523625 3774537 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0701 14:38:50.550455 3774537 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0701 14:38:50.573669 3774537 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0701 14:38:50.608285 3774537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0701 14:38:50.643482 3774537 ssh_runner.go:195] Run: openssl version
	I0701 14:38:50.653579 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 14:38:50.663562 3774537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:38:50.669464 3774537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:38:50.669543 3774537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:38:50.676754 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 14:38:50.690418 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3713725.pem && ln -fs /usr/share/ca-certificates/3713725.pem /etc/ssl/certs/3713725.pem"
	I0701 14:38:50.707650 3774537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3713725.pem
	I0701 14:38:50.711344 3774537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 14:25 /usr/share/ca-certificates/3713725.pem
	I0701 14:38:50.711422 3774537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3713725.pem
	I0701 14:38:50.722727 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3713725.pem /etc/ssl/certs/51391683.0"
	I0701 14:38:50.733052 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/37137252.pem && ln -fs /usr/share/ca-certificates/37137252.pem /etc/ssl/certs/37137252.pem"
	I0701 14:38:50.750608 3774537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/37137252.pem
	I0701 14:38:50.754469 3774537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 14:25 /usr/share/ca-certificates/37137252.pem
	I0701 14:38:50.754552 3774537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/37137252.pem
	I0701 14:38:50.765734 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/37137252.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 14:38:50.778595 3774537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 14:38:50.782672 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 14:38:50.794030 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 14:38:50.801588 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 14:38:50.808733 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 14:38:50.816338 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 14:38:50.823417 3774537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
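
Each `openssl x509 ... -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; minikube regenerates any that will not. The same check in Go's crypto/x509 (path taken from the log; pure sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, mirroring `openssl x509 -checkend` (86400s is 24h).
func expiresWithin(path string, d time.Duration) (bool, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(b)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
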
	I0701 14:38:50.832931 3774537 kubeadm.go:928] updating node {m02 192.168.49.3 8443 v1.30.2 crio true true} ...
	I0701 14:38:50.833057 3774537 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-767646-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-767646 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0701 14:38:50.833088 3774537 kube-vip.go:115] generating kube-vip config ...
	I0701 14:38:50.833143 3774537 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0701 14:38:50.862816 3774537 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0701 14:38:50.862894 3774537 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
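
The `lsmod | grep ip_vs` probe before this manifest is what gates the lb_enable/lb_port env vars above ("auto-enabling control-plane load-balancing"). An equivalent check in Go that reads /proc/modules instead of shelling out (a sketch):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded reports whether the ip_vs kernel module is loaded, the
// condition behind enabling kube-vip's control-plane load balancing.
func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := ipvsLoaded()
	fmt.Println(ok, err)
}
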
	I0701 14:38:50.862968 3774537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 14:38:50.876297 3774537 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 14:38:50.876379 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0701 14:38:50.888083 3774537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0701 14:38:50.917995 3774537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 14:38:50.952117 3774537 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0701 14:38:50.978453 3774537 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0701 14:38:50.982533 3774537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 14:38:50.993976 3774537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:38:51.139502 3774537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 14:38:51.158707 3774537 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0701 14:38:51.159110 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:38:51.165118 3774537 out.go:177] * Verifying Kubernetes components...
	I0701 14:38:51.167445 3774537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:38:51.327310 3774537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 14:38:51.343296 3774537 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:38:51.343603 3774537 kapi.go:59] client config for ha-767646: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/client.key", CAFile:"/home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x179ece0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 14:38:51.343671 3774537 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0701 14:38:51.345091 3774537 node_ready.go:35] waiting up to 6m0s for node "ha-767646-m02" to be "Ready" ...
	I0701 14:38:51.345195 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:38:51.345207 3774537 round_trippers.go:469] Request Headers:
	I0701 14:38:51.345216 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:38:51.345224 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:02.885597 3774537 round_trippers.go:574] Response Status: 500 Internal Server Error in 11540 milliseconds
	I0701 14:39:02.886042 3774537 node_ready.go:53] error getting node "ha-767646-m02": etcdserver: request timed out
	I0701 14:39:02.886107 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:39:02.886121 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:02.886130 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:02.886143 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.443902 3774537 round_trippers.go:574] Response Status: 200 OK in 3557 milliseconds
	I0701 14:39:06.455912 3774537 node_ready.go:49] node "ha-767646-m02" has status "Ready":"True"
	I0701 14:39:06.455935 3774537 node_ready.go:38] duration metric: took 15.110806408s for node "ha-767646-m02" to be "Ready" ...
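
The node_ready wait above is a GET on /api/v1/nodes/<name> repeated until the Ready condition is True; it took 15.1s here only because the first request hit an etcdserver timeout. With client-go the loop looks roughly like this (clientset construction is standard; the 2s poll interval is an assumption):

package main

import (
	"context"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True,
// as node_ready.go does above; the 6m timeout mirrors the log.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient (e.g. etcdserver timeout): keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-767646-m02"); err != nil {
		panic(err)
	}
}
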
	I0701 14:39:06.455945 3774537 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 14:39:06.456013 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0701 14:39:06.456019 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.456027 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.456030 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.490766 3774537 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0701 14:39:06.507730 3774537 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:06.508950 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:39:06.508980 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.509005 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.509048 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.514293 3774537 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 14:39:06.515319 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:06.515334 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.515343 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.515346 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.517973 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:06.518839 3774537 pod_ready.go:92] pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:06.518855 3774537 pod_ready.go:81] duration metric: took 10.008952ms for pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:06.518867 3774537 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tv8kl" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:06.518926 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tv8kl
	I0701 14:39:06.518931 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.518940 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.518944 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.523192 3774537 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 14:39:06.523864 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:06.523881 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.523890 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.523910 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.526898 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:06.527768 3774537 pod_ready.go:92] pod "coredns-7db6d8ff4d-tv8kl" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:06.527789 3774537 pod_ready.go:81] duration metric: took 8.915074ms for pod "coredns-7db6d8ff4d-tv8kl" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:06.527801 3774537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:06.527882 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-767646
	I0701 14:39:06.527907 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.527925 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.527936 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.530530 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:06.531185 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:06.531202 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.531210 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.531216 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.533930 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:06.534520 3774537 pod_ready.go:92] pod "etcd-ha-767646" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:06.534546 3774537 pod_ready.go:81] duration metric: took 6.719943ms for pod "etcd-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:06.534557 3774537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:06.534654 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-767646-m02
	I0701 14:39:06.534662 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.534671 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.534679 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.537109 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:06.539350 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:39:06.539370 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.539379 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.539382 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.542263 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:06.542862 3774537 pod_ready.go:92] pod "etcd-ha-767646-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:06.542884 3774537 pod_ready.go:81] duration metric: took 8.297813ms for pod "etcd-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:06.542895 3774537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-767646-m03" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:06.542954 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-767646-m03
	I0701 14:39:06.542964 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.542972 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.542976 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.545100 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:06.656131 3774537 request.go:629] Waited for 110.180433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m03
	I0701 14:39:06.656205 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m03
	I0701 14:39:06.656216 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.656225 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.656231 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.663216 3774537 round_trippers.go:574] Response Status: 404 Not Found in 6 milliseconds
	I0701 14:39:06.663453 3774537 pod_ready.go:97] node "ha-767646-m03" hosting pod "etcd-ha-767646-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-767646-m03": nodes "ha-767646-m03" not found
	I0701 14:39:06.663484 3774537 pod_ready.go:81] duration metric: took 120.582069ms for pod "etcd-ha-767646-m03" in "kube-system" namespace to be "Ready" ...
	E0701 14:39:06.663505 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646-m03" hosting pod "etcd-ha-767646-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-767646-m03": nodes "ha-767646-m03" not found
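
The "Waited ... due to client-side throttling" lines around this point come from client-go's default rate limiter: the rest.Config dumped earlier shows QPS:0, Burst:0, which client-go treats as its defaults (5 QPS, burst 10). Raising both suppresses these waits; a sketch with arbitrary illustrative values:

package main

import (
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a higher client-side rate limit
// than client-go's defaults, which caused the throttling waits here.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50 // illustrative values, not minikube's
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newFastClient(os.Getenv("KUBECONFIG")); err != nil {
		panic(err)
	}
}
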
	I0701 14:39:06.663538 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:06.857107 3774537 request.go:629] Waited for 193.49144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767646
	I0701 14:39:06.857165 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767646
	I0701 14:39:06.857175 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:06.857192 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:06.857201 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:06.860161 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:07.056166 3774537 request.go:629] Waited for 195.310428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:07.056240 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:07.056246 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:07.056255 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:07.056263 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:07.059375 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:39:07.060488 3774537 pod_ready.go:92] pod "kube-apiserver-ha-767646" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:07.060516 3774537 pod_ready.go:81] duration metric: took 396.965566ms for pod "kube-apiserver-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:07.060534 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:07.256406 3774537 request.go:629] Waited for 195.780225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767646-m02
	I0701 14:39:07.256488 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767646-m02
	I0701 14:39:07.256494 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:07.256503 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:07.256513 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:07.259580 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:39:07.456658 3774537 request.go:629] Waited for 196.344801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:39:07.456727 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:39:07.456732 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:07.456741 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:07.456745 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:07.459786 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:39:07.460379 3774537 pod_ready.go:92] pod "kube-apiserver-ha-767646-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:07.460402 3774537 pod_ready.go:81] duration metric: took 399.858272ms for pod "kube-apiserver-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:07.460417 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-767646-m03" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:07.656823 3774537 request.go:629] Waited for 196.324255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767646-m03
	I0701 14:39:07.656900 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767646-m03
	I0701 14:39:07.656906 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:07.656915 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:07.656924 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:07.659758 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:07.856862 3774537 request.go:629] Waited for 196.348912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m03
	I0701 14:39:07.856945 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m03
	I0701 14:39:07.856956 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:07.857047 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:07.857058 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:07.860414 3774537 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0701 14:39:07.860762 3774537 pod_ready.go:97] node "ha-767646-m03" hosting pod "kube-apiserver-ha-767646-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-767646-m03": nodes "ha-767646-m03" not found
	I0701 14:39:07.860785 3774537 pod_ready.go:81] duration metric: took 400.351183ms for pod "kube-apiserver-ha-767646-m03" in "kube-system" namespace to be "Ready" ...
	E0701 14:39:07.860814 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646-m03" hosting pod "kube-apiserver-ha-767646-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-767646-m03": nodes "ha-767646-m03" not found
	I0701 14:39:07.860824 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:08.056120 3774537 request.go:629] Waited for 195.157761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-767646
	I0701 14:39:08.056213 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-767646
	I0701 14:39:08.056229 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:08.056250 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:08.056256 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:08.059979 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:39:08.256603 3774537 request.go:629] Waited for 195.151797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:08.256677 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:08.256687 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:08.256696 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:08.256738 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:08.259852 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:39:08.260948 3774537 pod_ready.go:92] pod "kube-controller-manager-ha-767646" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:08.260981 3774537 pod_ready.go:81] duration metric: took 400.138307ms for pod "kube-controller-manager-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:08.261000 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:08.456954 3774537 request.go:629] Waited for 195.87676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-767646-m02
	I0701 14:39:08.457071 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-767646-m02
	I0701 14:39:08.457088 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:08.457097 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:08.457108 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:08.460386 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:39:08.657095 3774537 request.go:629] Waited for 195.384513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:39:08.657173 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:39:08.657184 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:08.657193 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:08.657218 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:08.660912 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:39:08.662259 3774537 pod_ready.go:92] pod "kube-controller-manager-ha-767646-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:08.662291 3774537 pod_ready.go:81] duration metric: took 401.276099ms for pod "kube-controller-manager-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:08.662320 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-767646-m03" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:08.856299 3774537 request.go:629] Waited for 193.908298ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-767646-m03
	I0701 14:39:08.856375 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-767646-m03
	I0701 14:39:08.856401 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:08.856413 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:08.856418 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:08.864499 3774537 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 14:39:09.057102 3774537 request.go:629] Waited for 191.319783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m03
	I0701 14:39:09.057182 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m03
	I0701 14:39:09.057192 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:09.057201 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:09.057210 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:09.060902 3774537 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0701 14:39:09.061318 3774537 pod_ready.go:97] node "ha-767646-m03" hosting pod "kube-controller-manager-ha-767646-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-767646-m03": nodes "ha-767646-m03" not found
	I0701 14:39:09.061342 3774537 pod_ready.go:81] duration metric: took 399.012807ms for pod "kube-controller-manager-ha-767646-m03" in "kube-system" namespace to be "Ready" ...
	E0701 14:39:09.061369 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646-m03" hosting pod "kube-controller-manager-ha-767646-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-767646-m03": nodes "ha-767646-m03" not found
	I0701 14:39:09.061382 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-48fx2" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:09.256595 3774537 request.go:629] Waited for 195.111091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-48fx2
	I0701 14:39:09.256714 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-48fx2
	I0701 14:39:09.256726 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:09.256741 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:09.256751 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:09.260406 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:39:09.456688 3774537 request.go:629] Waited for 194.850542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m03
	I0701 14:39:09.456794 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m03
	I0701 14:39:09.456805 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:09.456844 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:09.456858 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:09.460140 3774537 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0701 14:39:09.460503 3774537 pod_ready.go:97] node "ha-767646-m03" hosting pod "kube-proxy-48fx2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-767646-m03": nodes "ha-767646-m03" not found
	I0701 14:39:09.460524 3774537 pod_ready.go:81] duration metric: took 399.132431ms for pod "kube-proxy-48fx2" in "kube-system" namespace to be "Ready" ...
	E0701 14:39:09.460535 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646-m03" hosting pod "kube-proxy-48fx2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-767646-m03": nodes "ha-767646-m03" not found
	I0701 14:39:09.460548 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gt25" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:09.656764 3774537 request.go:629] Waited for 196.121144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gt25
	I0701 14:39:09.656860 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gt25
	I0701 14:39:09.656887 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:09.656900 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:09.656904 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:09.665066 3774537 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0701 14:39:09.856593 3774537 request.go:629] Waited for 190.133245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:09.856766 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:09.856789 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:09.856802 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:09.856806 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:09.883495 3774537 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0701 14:39:09.888204 3774537 pod_ready.go:92] pod "kube-proxy-6gt25" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:09.888228 3774537 pod_ready.go:81] duration metric: took 427.656076ms for pod "kube-proxy-6gt25" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:09.888250 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dz99m" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:10.056567 3774537 request.go:629] Waited for 168.234822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dz99m
	I0701 14:39:10.056658 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dz99m
	I0701 14:39:10.056677 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:10.056692 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:10.056698 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:10.059404 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:10.256719 3774537 request.go:629] Waited for 196.302331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:39:10.256798 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:39:10.256809 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:10.256819 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:10.256828 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:10.259450 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:10.260452 3774537 pod_ready.go:92] pod "kube-proxy-dz99m" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:10.260476 3774537 pod_ready.go:81] duration metric: took 372.211929ms for pod "kube-proxy-dz99m" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:10.260489 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s476n" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:10.456461 3774537 request.go:629] Waited for 195.897068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s476n
	I0701 14:39:10.456529 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s476n
	I0701 14:39:10.456541 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:10.456551 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:10.456557 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:10.459078 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:10.656393 3774537 request.go:629] Waited for 196.303685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:39:10.656456 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:39:10.656464 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:10.656473 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:10.656481 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:10.659120 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:10.660152 3774537 pod_ready.go:92] pod "kube-proxy-s476n" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:10.660175 3774537 pod_ready.go:81] duration metric: took 399.665262ms for pod "kube-proxy-s476n" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:10.660188 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:10.856555 3774537 request.go:629] Waited for 196.300903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-767646
	I0701 14:39:10.856629 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-767646
	I0701 14:39:10.856639 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:10.856648 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:10.856656 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:10.859329 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:11.056116 3774537 request.go:629] Waited for 196.183643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:11.056189 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:39:11.056198 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:11.056215 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:11.056224 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:11.058781 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:39:11.059623 3774537 pod_ready.go:92] pod "kube-scheduler-ha-767646" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:11.059648 3774537 pod_ready.go:81] duration metric: took 399.445714ms for pod "kube-scheduler-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:11.059661 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:11.257037 3774537 request.go:629] Waited for 197.299313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-767646-m02
	I0701 14:39:11.257103 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-767646-m02
	I0701 14:39:11.257118 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:11.257127 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:11.257139 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:11.262332 3774537 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 14:39:11.456081 3774537 request.go:629] Waited for 193.248451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:39:11.456146 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:39:11.456152 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:11.456162 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:11.456165 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:11.460059 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:39:11.461004 3774537 pod_ready.go:92] pod "kube-scheduler-ha-767646-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 14:39:11.461055 3774537 pod_ready.go:81] duration metric: took 401.386025ms for pod "kube-scheduler-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:11.461068 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-767646-m03" in "kube-system" namespace to be "Ready" ...
	I0701 14:39:11.656422 3774537 request.go:629] Waited for 195.288988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-767646-m03
	I0701 14:39:11.656488 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-767646-m03
	I0701 14:39:11.656494 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:11.656503 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:11.656506 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:11.659561 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:39:11.856086 3774537 request.go:629] Waited for 195.57713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m03
	I0701 14:39:11.856138 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m03
	I0701 14:39:11.856145 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:11.856152 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:11.856156 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:11.860049 3774537 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0701 14:39:11.860343 3774537 pod_ready.go:97] node "ha-767646-m03" hosting pod "kube-scheduler-ha-767646-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-767646-m03": nodes "ha-767646-m03" not found
	I0701 14:39:11.860370 3774537 pod_ready.go:81] duration metric: took 399.294444ms for pod "kube-scheduler-ha-767646-m03" in "kube-system" namespace to be "Ready" ...
	E0701 14:39:11.860380 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646-m03" hosting pod "kube-scheduler-ha-767646-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-767646-m03": nodes "ha-767646-m03" not found
	I0701 14:39:11.860389 3774537 pod_ready.go:38] duration metric: took 5.404433447s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
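Each pod_ready cycle above is the same two-step probe: GET the pod, then GET the node it is scheduled on; when the node lookup 404s (ha-767646-m03 was removed earlier in this test), the wait is recorded as a skip rather than a failure. A minimal sketch of that loop, assuming client-go; waitPodReady, the namespace handling, and the intervals are illustrative, not minikube's actual helper:

	package example

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady reduces the pod_ready loop traced above: poll the pod,
	// poll its node, and treat a missing node as a skip instead of an error.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				// Node deleted entirely: mirror the "(skipping!)" branch above.
				if _, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); apierrors.IsNotFound(err) {
					return true, nil
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}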
	I0701 14:39:11.860403 3774537 api_server.go:52] waiting for apiserver process to appear ...
	I0701 14:39:11.860464 3774537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 14:39:11.890778 3774537 api_server.go:72] duration metric: took 20.7320226s to wait for apiserver process to appear ...
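The process check just above is pgrep run over SSH inside the node: -x exact-match, -n newest, -f against the full command line, with a pattern that requires both kube-apiserver and minikube to appear on that command line. A rough local equivalent, sketched with os/exec (the command and pattern are copied from the log; the surrounding program is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the log shows ssh_runner executing in the node.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exits non-zero when nothing matches: the apiserver
			// process has not appeared yet, so the caller keeps polling.
			fmt.Println("kube-apiserver not running yet:", err)
			return
		}
		fmt.Printf("kube-apiserver pid: %s", out)
	}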
	I0701 14:39:11.890800 3774537 api_server.go:88] waiting for apiserver healthz status ...
	I0701 14:39:11.890819 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:11.901862 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:11.901889 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
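The body logged above is kube-apiserver's per-check health report: every registered check is listed as [+] (passing) or [-] (failing), and a single failing check turns the aggregate /healthz into a 500. Here the failing check is the start-service-ip-repair-controllers post-start hook, which normally clears shortly after the apiserver finishes starting. The detail reads "reason withheld" because the aggregate endpoint never includes failure reasons; those go to the apiserver's own log, and to clients explicitly permitted to hit the individual check path (e.g. /healthz/poststarthook/start-service-ip-repair-controllers). A minimal probe of the same endpoint; skipping TLS verification here is an illustration-only shortcut in place of the minikube client certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Illustration only: do not skip verification outside local debugging.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		// ?verbose forces the per-check listing even on success; on failure
		// the apiserver returns it regardless, as in the 500 bodies here.
		resp, err := client.Get("https://192.168.49.2:8443/healthz?verbose")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode) // 200 once every check reports [+]
		fmt.Println(string(body))
	}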
	I0701 14:39:12.391088 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:12.399207 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	W0701 14:39:12.399230 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	I0701 14:39:12.891879 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:12.900887 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	W0701 14:39:12.900916 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	I0701 14:39:13.391233 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:13.398976 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	W0701 14:39:13.399016 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	I0701 14:39:13.891590 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:13.920884 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	W0701 14:39:13.920922 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	I0701 14:39:14.391271 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:14.398987 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	W0701 14:39:14.399021 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	I0701 14:39:14.891715 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:14.899388 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	W0701 14:39:14.899415 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	I0701 14:39:15.390883 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:15.398685 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	W0701 14:39:15.398715 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	I0701 14:39:15.891236 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:15.898751 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	W0701 14:39:15.898778 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[... healthz detail identical to the first listing above: only poststarthook/start-service-ip-repair-controllers failing ...]
	I0701 14:39:16.391297 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:16.399001 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:16.399029 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:16.891606 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:16.899392 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:16.899422 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:17.391028 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:17.398909 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:17.398940 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:17.891436 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:17.900068 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:17.900094 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:18.391757 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:18.400224 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:18.400258 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:18.891578 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:18.899757 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:18.899802 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:19.391250 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:19.399837 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:19.399864 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:19.891582 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:19.899258 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:19.899291 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:20.391877 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:20.399746 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:20.399793 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:20.891129 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:20.899068 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:20.899136 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:21.391635 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:21.399552 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:21.399590 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:21.891366 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:21.899067 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:21.899096 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:22.391665 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:22.400066 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:22.400113 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:22.891506 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:22.899137 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:22.899165 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:23.391817 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:23.399515 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:23.399551 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:23.890944 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:23.898691 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:23.898735 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[output identical to the check list above: every check ok except "[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld"]
	[the same probe then repeated roughly every 500ms from 14:39:24.391 through 14:39:32.391: each "Checking apiserver healthz at https://192.168.49.2:8443/healthz ..." (api_server.go:253) received a 500 with this identical check output, logged once at Info level (api_server.go:279) and once as a Warning (api_server.go:103)]
	I0701 14:39:32.891518 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:32.899238 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:32.899267 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:33.391953 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:33.420989 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:33.421041 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:33.891606 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:33.899311 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:33.899348 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:34.391782 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:34.399589 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:34.399634 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:34.890946 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:34.898522 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:34.898548 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:35.391072 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:35.398762 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:35.398793 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:35.891075 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:35.898801 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:35.898831 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:36.391351 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:36.399456 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:36.399483 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:36.891247 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:36.899001 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:36.899036 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:37.391631 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:37.399715 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:37.399746 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:37.890952 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:37.898577 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:37.898610 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:38.391118 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:38.400516 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:38.400553 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:38.890983 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:38.907463 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:38.907497 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:39.391890 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:39.399499 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:39.399530 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:39.891405 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:39.899250 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:39.899277 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:40.391906 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:40.401421 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:40.401448 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:40.890941 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:40.898576 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:40.898614 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
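	The loop above is minikube's api_server.go polling https://192.168.49.2:8443/healthz roughly every 500 ms while the control plane restarts: the apiserver answers 500 for as long as any registered check fails, and here the only failing check is poststarthook/start-service-ip-repair-controllers ("reason withheld" is the apiserver deliberately keeping the failure detail out of the HTTP body; the specifics land in the apiserver's own log). For orientation, a minimal sketch of such a poll loop, assuming a 2-minute budget and skipped TLS verification -- the URL and the ~500 ms cadence come from the log above, everything else is illustrative and not minikube's actual implementation:

	// Illustrative sketch only; not minikube's code. The endpoint URL and the
	// poll cadence are taken from the log; interval, timeout, and the skipped
	// certificate verification are assumptions for this sketch.
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(ctx context.Context, url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a cluster-local cert here, so verification
			// is skipped in the sketch; real code should trust the cluster CA.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // every check reported ok
				}
				// On 500 the body lists each check, e.g.
				// "[-]poststarthook/start-service-ip-repair-controllers failed".
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForHealthz(ctx, "https://192.168.49.2:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}

	Once every registered check reports ok, /healthz flips to 200 and the wait returns; until then the same per-check listing repeats, as the rest of this log shows.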
	[... 16 further identical poll cycles elided: the healthz check was retried every ~500 ms from 14:39:41.391 to 14:39:48.898, each time returning the same 500 response with only poststarthook/start-service-ip-repair-controllers failing ...]
	I0701 14:39:49.390973 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:49.398678 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:49.398703 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:49.891477 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:49.899190 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:49.899215 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:50.391838 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:50.399606 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:50.399645 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0701 14:39:50.891037 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:50.898731 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0701 14:39:50.898755 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
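The repeated 500s above come from minikube polling the apiserver's /healthz endpoint roughly every 500ms while the start-service-ip-repair-controllers post-start hook is still failing; every other check in the body reports ok. A minimal sketch of such a poll loop follows — this is not minikube's actual implementation, and the URL, interval, and timeout are assumptions; certificate verification is skipped because the apiserver serves a self-signed certificate in this setup.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz polls the apiserver /healthz endpoint until it returns 200
// or the context expires. The body of a non-200 response lists each check
// as [+] ok or [-] failed, as seen in the log above.
func pollHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		// Assumption for the sketch: skip verification of the self-signed cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := pollHealthz(ctx, "https://192.168.49.2:8443/healthz"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
	}
}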
	I0701 14:39:51.390981 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 14:39:51.391091 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 14:39:51.440730 3774537 cri.go:89] found id: "d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a"
	I0701 14:39:51.440756 3774537 cri.go:89] found id: "61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483"
	I0701 14:39:51.440761 3774537 cri.go:89] found id: ""
	I0701 14:39:51.440768 3774537 logs.go:276] 2 containers: [d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a 61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483]
	I0701 14:39:51.440826 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:51.444546 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:51.448261 3774537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 14:39:51.448333 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 14:39:51.487807 3774537 cri.go:89] found id: "21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933"
	I0701 14:39:51.487834 3774537 cri.go:89] found id: "63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491"
	I0701 14:39:51.487839 3774537 cri.go:89] found id: ""
	I0701 14:39:51.487846 3774537 logs.go:276] 2 containers: [21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933 63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491]
	I0701 14:39:51.487902 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:51.491650 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:51.495251 3774537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 14:39:51.495317 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 14:39:51.535911 3774537 cri.go:89] found id: ""
	I0701 14:39:51.535933 3774537 logs.go:276] 0 containers: []
	W0701 14:39:51.535942 3774537 logs.go:278] No container was found matching "coredns"
	I0701 14:39:51.535948 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 14:39:51.536004 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 14:39:51.576361 3774537 cri.go:89] found id: "8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b"
	I0701 14:39:51.576429 3774537 cri.go:89] found id: "ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d"
	I0701 14:39:51.576448 3774537 cri.go:89] found id: ""
	I0701 14:39:51.576471 3774537 logs.go:276] 2 containers: [8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d]
	I0701 14:39:51.576564 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:51.580484 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:51.584343 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 14:39:51.584488 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 14:39:51.621487 3774537 cri.go:89] found id: "95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f"
	I0701 14:39:51.621510 3774537 cri.go:89] found id: ""
	I0701 14:39:51.621518 3774537 logs.go:276] 1 containers: [95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f]
	I0701 14:39:51.621574 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:51.625203 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 14:39:51.625279 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 14:39:51.661524 3774537 cri.go:89] found id: "cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce"
	I0701 14:39:51.661547 3774537 cri.go:89] found id: "bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119"
	I0701 14:39:51.661553 3774537 cri.go:89] found id: ""
	I0701 14:39:51.661560 3774537 logs.go:276] 2 containers: [cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119]
	I0701 14:39:51.661618 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:51.665037 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:51.668326 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 14:39:51.668442 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 14:39:51.709443 3774537 cri.go:89] found id: "3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba"
	I0701 14:39:51.709467 3774537 cri.go:89] found id: ""
	I0701 14:39:51.709486 3774537 logs.go:276] 1 containers: [3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba]
	I0701 14:39:51.709550 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:51.713281 3774537 logs.go:123] Gathering logs for dmesg ...
	I0701 14:39:51.713350 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 14:39:51.735433 3774537 logs.go:123] Gathering logs for kindnet [3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba] ...
	I0701 14:39:51.735464 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba"
	I0701 14:39:51.779381 3774537 logs.go:123] Gathering logs for kubelet ...
	I0701 14:39:51.779407 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 14:39:51.844848 3774537 logs.go:123] Gathering logs for describe nodes ...
	I0701 14:39:51.844884 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 14:39:52.201718 3774537 logs.go:123] Gathering logs for kube-controller-manager [cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce] ...
	I0701 14:39:52.201753 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce"
	I0701 14:39:52.270504 3774537 logs.go:123] Gathering logs for kube-controller-manager [bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119] ...
	I0701 14:39:52.270539 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119"
	I0701 14:39:52.305957 3774537 logs.go:123] Gathering logs for CRI-O ...
	I0701 14:39:52.305985 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 14:39:52.371781 3774537 logs.go:123] Gathering logs for kube-apiserver [61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483] ...
	I0701 14:39:52.371815 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483"
	I0701 14:39:52.412946 3774537 logs.go:123] Gathering logs for kube-scheduler [8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b] ...
	I0701 14:39:52.412975 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b"
	I0701 14:39:52.452036 3774537 logs.go:123] Gathering logs for kube-apiserver [d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a] ...
	I0701 14:39:52.452062 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a"
	I0701 14:39:52.496010 3774537 logs.go:123] Gathering logs for etcd [21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933] ...
	I0701 14:39:52.496037 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933"
	I0701 14:39:52.556858 3774537 logs.go:123] Gathering logs for etcd [63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491] ...
	I0701 14:39:52.556889 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491"
	I0701 14:39:52.606964 3774537 logs.go:123] Gathering logs for kube-scheduler [ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d] ...
	I0701 14:39:52.607000 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d"
	I0701 14:39:52.645916 3774537 logs.go:123] Gathering logs for kube-proxy [95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f] ...
	I0701 14:39:52.645985 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f"
	I0701 14:39:52.684700 3774537 logs.go:123] Gathering logs for container status ...
	I0701 14:39:52.684773 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
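Each "Gathering logs for ..." step above shells out to crictl over SSH with the exact command shown in the log. A rough local equivalent using os/exec — the helper name is made up for the sketch, and the container ID is copied from the kube-apiserver entry above:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs fetches the last n lines of a container's logs via
// crictl, mirroring the "sudo /usr/bin/crictl logs --tail 400 <id>"
// commands in the log above.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := tailContainerLogs("d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a", 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(out)
}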
	I0701 14:39:55.228229 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:55.869728 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0701 14:39:55.869756 3774537 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0701 14:39:55.869788 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 14:39:55.869858 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 14:39:55.939946 3774537 cri.go:89] found id: "d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a"
	I0701 14:39:55.940004 3774537 cri.go:89] found id: "61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483"
	I0701 14:39:55.940038 3774537 cri.go:89] found id: ""
	I0701 14:39:55.940058 3774537 logs.go:276] 2 containers: [d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a 61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483]
	I0701 14:39:55.940145 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:55.943946 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:55.947331 3774537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 14:39:55.947466 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 14:39:56.029163 3774537 cri.go:89] found id: "21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933"
	I0701 14:39:56.029183 3774537 cri.go:89] found id: "63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491"
	I0701 14:39:56.029192 3774537 cri.go:89] found id: ""
	I0701 14:39:56.029199 3774537 logs.go:276] 2 containers: [21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933 63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491]
	I0701 14:39:56.029276 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:56.039227 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:56.044626 3774537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 14:39:56.044816 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 14:39:56.109724 3774537 cri.go:89] found id: ""
	I0701 14:39:56.109820 3774537 logs.go:276] 0 containers: []
	W0701 14:39:56.109843 3774537 logs.go:278] No container was found matching "coredns"
	I0701 14:39:56.109864 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 14:39:56.109988 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 14:39:56.159317 3774537 cri.go:89] found id: "8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b"
	I0701 14:39:56.159391 3774537 cri.go:89] found id: "ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d"
	I0701 14:39:56.159410 3774537 cri.go:89] found id: ""
	I0701 14:39:56.159433 3774537 logs.go:276] 2 containers: [8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d]
	I0701 14:39:56.159530 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:56.164004 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:56.167551 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 14:39:56.167664 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 14:39:56.206295 3774537 cri.go:89] found id: "95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f"
	I0701 14:39:56.206319 3774537 cri.go:89] found id: ""
	I0701 14:39:56.206328 3774537 logs.go:276] 1 containers: [95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f]
	I0701 14:39:56.206379 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:56.210189 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 14:39:56.210261 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 14:39:56.250740 3774537 cri.go:89] found id: "cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce"
	I0701 14:39:56.250764 3774537 cri.go:89] found id: "bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119"
	I0701 14:39:56.250770 3774537 cri.go:89] found id: ""
	I0701 14:39:56.250777 3774537 logs.go:276] 2 containers: [cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119]
	I0701 14:39:56.250861 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:56.256354 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:56.260253 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 14:39:56.260368 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 14:39:56.308617 3774537 cri.go:89] found id: "3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba"
	I0701 14:39:56.308693 3774537 cri.go:89] found id: ""
	I0701 14:39:56.308716 3774537 logs.go:276] 1 containers: [3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba]
	I0701 14:39:56.308791 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:56.312539 3774537 logs.go:123] Gathering logs for etcd [63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491] ...
	I0701 14:39:56.312563 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491"
	I0701 14:39:56.373109 3774537 logs.go:123] Gathering logs for kube-scheduler [ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d] ...
	I0701 14:39:56.373144 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d"
	I0701 14:39:56.417609 3774537 logs.go:123] Gathering logs for CRI-O ...
	I0701 14:39:56.417677 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 14:39:56.488176 3774537 logs.go:123] Gathering logs for kube-controller-manager [cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce] ...
	I0701 14:39:56.488249 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce"
	I0701 14:39:56.553741 3774537 logs.go:123] Gathering logs for kube-apiserver [d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a] ...
	I0701 14:39:56.553813 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a"
	I0701 14:39:56.612116 3774537 logs.go:123] Gathering logs for kube-apiserver [61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483] ...
	I0701 14:39:56.612149 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483"
	I0701 14:39:56.648481 3774537 logs.go:123] Gathering logs for etcd [21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933] ...
	I0701 14:39:56.648511 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933"
	I0701 14:39:56.702145 3774537 logs.go:123] Gathering logs for kube-controller-manager [bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119] ...
	I0701 14:39:56.702187 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119"
	I0701 14:39:56.762826 3774537 logs.go:123] Gathering logs for kubelet ...
	I0701 14:39:56.762853 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 14:39:56.850494 3774537 logs.go:123] Gathering logs for dmesg ...
	I0701 14:39:56.852694 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 14:39:56.878730 3774537 logs.go:123] Gathering logs for kube-proxy [95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f] ...
	I0701 14:39:56.878803 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f"
	I0701 14:39:56.936942 3774537 logs.go:123] Gathering logs for container status ...
	I0701 14:39:56.937039 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 14:39:56.990042 3774537 logs.go:123] Gathering logs for describe nodes ...
	I0701 14:39:56.990072 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 14:39:57.349662 3774537 logs.go:123] Gathering logs for kube-scheduler [8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b] ...
	I0701 14:39:57.349703 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b"
	I0701 14:39:57.391884 3774537 logs.go:123] Gathering logs for kindnet [3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba] ...
	I0701 14:39:57.391914 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba"
	I0701 14:39:59.927496 3774537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0701 14:39:59.936943 3774537 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0701 14:39:59.937050 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0701 14:39:59.937063 3774537 round_trippers.go:469] Request Headers:
	I0701 14:39:59.937073 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:39:59.937077 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:39:59.950754 3774537 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0701 14:39:59.951858 3774537 api_server.go:141] control plane version: v1.30.2
	I0701 14:39:59.951887 3774537 api_server.go:131] duration metric: took 48.061080887s to wait for apiserver health ...
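Once /healthz finally returns 200, minikube reads the control-plane version from the /version endpoint, which serves a small JSON document whose gitVersion field carries the value logged above (v1.30.2). A minimal decoder for that response — the insecure TLS setting is an assumption carried over from the earlier sketch:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// versionInfo holds just the field used here; the full /version payload
// also carries major, minor, platform, and build metadata.
type versionInfo struct {
	GitVersion string `json:"gitVersion"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: self-signed cert
	}}
	resp, err := client.Get("https://192.168.49.2:8443/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.30.2
}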
	I0701 14:39:59.951897 3774537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0701 14:39:59.951919 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 14:39:59.951986 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 14:39:59.990257 3774537 cri.go:89] found id: "d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a"
	I0701 14:39:59.990333 3774537 cri.go:89] found id: "61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483"
	I0701 14:39:59.990346 3774537 cri.go:89] found id: ""
	I0701 14:39:59.990354 3774537 logs.go:276] 2 containers: [d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a 61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483]
	I0701 14:39:59.990417 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:59.994029 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:39:59.997721 3774537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 14:39:59.997822 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 14:40:00.083728 3774537 cri.go:89] found id: "21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933"
	I0701 14:40:00.083804 3774537 cri.go:89] found id: "63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491"
	I0701 14:40:00.083837 3774537 cri.go:89] found id: ""
	I0701 14:40:00.083875 3774537 logs.go:276] 2 containers: [21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933 63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491]
	I0701 14:40:00.083975 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:40:00.089713 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:40:00.100662 3774537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 14:40:00.100797 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 14:40:00.207210 3774537 cri.go:89] found id: ""
	I0701 14:40:00.207300 3774537 logs.go:276] 0 containers: []
	W0701 14:40:00.207326 3774537 logs.go:278] No container was found matching "coredns"
	I0701 14:40:00.207355 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 14:40:00.207477 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 14:40:00.335109 3774537 cri.go:89] found id: "8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b"
	I0701 14:40:00.335187 3774537 cri.go:89] found id: "ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d"
	I0701 14:40:00.335207 3774537 cri.go:89] found id: ""
	I0701 14:40:00.335233 3774537 logs.go:276] 2 containers: [8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d]
	I0701 14:40:00.335325 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:40:00.341770 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:40:00.352795 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 14:40:00.352964 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 14:40:00.415460 3774537 cri.go:89] found id: "95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f"
	I0701 14:40:00.415575 3774537 cri.go:89] found id: ""
	I0701 14:40:00.415601 3774537 logs.go:276] 1 containers: [95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f]
	I0701 14:40:00.415740 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:40:00.420690 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 14:40:00.420838 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 14:40:00.473230 3774537 cri.go:89] found id: "cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce"
	I0701 14:40:00.473294 3774537 cri.go:89] found id: "bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119"
	I0701 14:40:00.473314 3774537 cri.go:89] found id: ""
	I0701 14:40:00.473339 3774537 logs.go:276] 2 containers: [cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119]
	I0701 14:40:00.473428 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:40:00.477348 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:40:00.481378 3774537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 14:40:00.481501 3774537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 14:40:00.533004 3774537 cri.go:89] found id: "3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba"
	I0701 14:40:00.533108 3774537 cri.go:89] found id: ""
	I0701 14:40:00.533140 3774537 logs.go:276] 1 containers: [3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba]
	I0701 14:40:00.533229 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:40:00.539701 3774537 logs.go:123] Gathering logs for kubelet ...
	I0701 14:40:00.539782 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0701 14:40:00.614930 3774537 logs.go:123] Gathering logs for etcd [63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491] ...
	I0701 14:40:00.614967 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63af94a46f8a82db34e70144fb5439cec7638dac54519a0a8be7d3fb88d4c491"
	I0701 14:40:00.682217 3774537 logs.go:123] Gathering logs for kube-scheduler [8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b] ...
	I0701 14:40:00.682256 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ab7acf126a897f0e6cf3bb916c9f90317567d515a65e6b2b232dded17438c5b"
	I0701 14:40:00.739759 3774537 logs.go:123] Gathering logs for kube-apiserver [d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a] ...
	I0701 14:40:00.739787 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d77f8585300428bd164c95feaab89b1335e86776e85ff381369861ae5657dd5a"
	I0701 14:40:00.804049 3774537 logs.go:123] Gathering logs for kube-apiserver [61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483] ...
	I0701 14:40:00.804080 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61d1f43c44b63951b2d28b886fc46704fb748c39efeb51e95d66e758e6b1b483"
	I0701 14:40:00.847571 3774537 logs.go:123] Gathering logs for kube-controller-manager [cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce] ...
	I0701 14:40:00.847672 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cad3b892e384c73b0b522749ba9ec06698a7feae0fc7e80f3803fe607d4810ce"
	I0701 14:40:00.908762 3774537 logs.go:123] Gathering logs for container status ...
	I0701 14:40:00.908796 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 14:40:00.955437 3774537 logs.go:123] Gathering logs for dmesg ...
	I0701 14:40:00.955465 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 14:40:00.976943 3774537 logs.go:123] Gathering logs for describe nodes ...
	I0701 14:40:00.976971 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 14:40:01.264102 3774537 logs.go:123] Gathering logs for kube-proxy [95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f] ...
	I0701 14:40:01.264143 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95587abe2be5dafe9bc8249c75bf8c72dbea9fefad322ebcbe3d6344b430af3f"
	I0701 14:40:01.328234 3774537 logs.go:123] Gathering logs for etcd [21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933] ...
	I0701 14:40:01.328266 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21a87bbb4816e8090d7cb1ceeb27b972fda5f8009ce284a524f93607b473d933"
	I0701 14:40:01.389608 3774537 logs.go:123] Gathering logs for kube-scheduler [ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d] ...
	I0701 14:40:01.389646 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae6119e19a78edb4f37833ebf8f67a6500d59dc074573f5b47dbaea9faf2fe7d"
	I0701 14:40:01.429245 3774537 logs.go:123] Gathering logs for kube-controller-manager [bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119] ...
	I0701 14:40:01.429281 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc6844515f5e1ac8674f38e8bab7c00008d602ac45a65b4df477e29b47f52119"
	I0701 14:40:01.466987 3774537 logs.go:123] Gathering logs for kindnet [3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba] ...
	I0701 14:40:01.467020 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f404c1e27a970bd3d1753095eeeeacd481784246413c4ff68f82701e513c1ba"
	I0701 14:40:01.508640 3774537 logs.go:123] Gathering logs for CRI-O ...
	I0701 14:40:01.508668 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 14:40:04.078925 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0701 14:40:04.078949 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:04.078962 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:04.078965 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:04.094273 3774537 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0701 14:40:04.103846 3774537 system_pods.go:59] 26 kube-system pods found
	I0701 14:40:04.103891 3774537 system_pods.go:61] "coredns-7db6d8ff4d-ggtnh" [f7b1a325-7f10-4e72-af03-255134b88169] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0701 14:40:04.103901 3774537 system_pods.go:61] "coredns-7db6d8ff4d-tv8kl" [f1cb6c9c-3857-49b9-8eb6-c26cb64d2ba1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0701 14:40:04.103907 3774537 system_pods.go:61] "etcd-ha-767646" [8fc5d6c5-1fcd-49e5-9d57-3fd5f6ea1c96] Running
	I0701 14:40:04.103912 3774537 system_pods.go:61] "etcd-ha-767646-m02" [054be01f-55b0-444b-aad5-4b4a80c52690] Running
	I0701 14:40:04.103917 3774537 system_pods.go:61] "etcd-ha-767646-m03" [d4fd1cc2-d6e6-43bd-b5ae-ff03b9cc3dc7] Running
	I0701 14:40:04.103921 3774537 system_pods.go:61] "kindnet-7q2qb" [295d67c7-95ee-404e-9cad-2917ad62719f] Running
	I0701 14:40:04.103925 3774537 system_pods.go:61] "kindnet-hcsth" [ec12918e-9c57-4874-9e64-d92bc2a92ee1] Running
	I0701 14:40:04.103929 3774537 system_pods.go:61] "kindnet-nmzbs" [750eaa60-a8d0-40a5-a609-51ff2d3a2017] Running
	I0701 14:40:04.103933 3774537 system_pods.go:61] "kindnet-vp2jn" [6d6ff337-5a04-4219-ba5c-a7491a903a87] Running
	I0701 14:40:04.103939 3774537 system_pods.go:61] "kube-apiserver-ha-767646" [efb0aa7c-9838-4d27-b9b3-0d6f663d8692] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0701 14:40:04.103944 3774537 system_pods.go:61] "kube-apiserver-ha-767646-m02" [1f764b16-60fe-4a04-b456-8fc8720f7919] Running
	I0701 14:40:04.103955 3774537 system_pods.go:61] "kube-apiserver-ha-767646-m03" [8ff4a51a-e168-463d-870f-4c9a45935ac3] Running
	I0701 14:40:04.103962 3774537 system_pods.go:61] "kube-controller-manager-ha-767646" [5771068f-6419-4999-8d57-516849064a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 14:40:04.103974 3774537 system_pods.go:61] "kube-controller-manager-ha-767646-m02" [e764ef2c-9ea2-4a5b-8fcf-c5ee803e49f0] Running
	I0701 14:40:04.103979 3774537 system_pods.go:61] "kube-controller-manager-ha-767646-m03" [7c88841c-8ca6-4059-a58b-391ce6c764a8] Running
	I0701 14:40:04.103982 3774537 system_pods.go:61] "kube-proxy-48fx2" [280caa88-d5cd-41b7-8088-e3602feb2e08] Running
	I0701 14:40:04.103986 3774537 system_pods.go:61] "kube-proxy-6gt25" [9440c1ab-9dcc-43ef-80ee-33fbb76a1808] Running
	I0701 14:40:04.103991 3774537 system_pods.go:61] "kube-proxy-dz99m" [20ccadc4-322b-4505-bf19-afa3844a23d3] Running
	I0701 14:40:04.103994 3774537 system_pods.go:61] "kube-proxy-s476n" [151468ea-30d3-4a85-ba6b-3b080a110ddc] Running
	I0701 14:40:04.104004 3774537 system_pods.go:61] "kube-scheduler-ha-767646" [232f42d2-99ef-46b0-b71d-15a01c015c60] Running
	I0701 14:40:04.104008 3774537 system_pods.go:61] "kube-scheduler-ha-767646-m02" [7300cf4d-8d81-433e-814a-63c23cf30f09] Running
	I0701 14:40:04.104013 3774537 system_pods.go:61] "kube-scheduler-ha-767646-m03" [55b8ea18-94bd-4a11-bbfd-6d93fe948bf5] Running
	I0701 14:40:04.104017 3774537 system_pods.go:61] "kube-vip-ha-767646" [ed89d62e-c168-4566-8251-f6c2e39cb37a] Running
	I0701 14:40:04.104020 3774537 system_pods.go:61] "kube-vip-ha-767646-m02" [11521b91-c961-454b-96da-69f84b4dc018] Running
	I0701 14:40:04.104024 3774537 system_pods.go:61] "kube-vip-ha-767646-m03" [64bad709-c831-4bf2-9604-fd2424cdcc22] Running
	I0701 14:40:04.104028 3774537 system_pods.go:61] "storage-provisioner" [0e97e688-e573-4cdf-aab8-c4b2aea4bba8] Running
	I0701 14:40:04.104037 3774537 system_pods.go:74] duration metric: took 4.152133996s to wait for pod list to return data ...
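The pod inventory above is a plain GET on /api/v1/namespaces/kube-system/pods. With client-go the same listing looks roughly like the sketch below; the kubeconfig path is an assumption, and minikube builds its client differently in practice.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from a kubeconfig file (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}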
	I0701 14:40:04.104052 3774537 default_sa.go:34] waiting for default service account to be created ...
	I0701 14:40:04.104284 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0701 14:40:04.104298 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:04.104307 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:04.104313 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:04.107186 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:04.107418 3774537 default_sa.go:45] found service account: "default"
	I0701 14:40:04.107434 3774537 default_sa.go:55] duration metric: took 3.376197ms for default service account to be created ...
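Waiting for the default service account is a single GET against /api/v1/namespaces/default/serviceaccounts; the account appears once the controller manager has bootstrapped the namespace. A sketch of the same existence check with client-go, built the same way as in the previous sketch:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hasDefaultServiceAccount reports whether the "default" ServiceAccount
// exists yet in the "default" namespace; NotFound is not treated as an error.
func hasDefaultServiceAccount(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	return err == nil, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := hasDefaultServiceAccount(context.TODO(), cs)
	fmt.Println("found service account:", ok, err)
}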
	I0701 14:40:04.107443 3774537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0701 14:40:04.107499 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0701 14:40:04.107508 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:04.107515 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:04.107524 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:04.114397 3774537 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 14:40:04.124154 3774537 system_pods.go:86] 26 kube-system pods found
	I0701 14:40:04.124409 3774537 system_pods.go:89] "coredns-7db6d8ff4d-ggtnh" [f7b1a325-7f10-4e72-af03-255134b88169] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0701 14:40:04.124432 3774537 system_pods.go:89] "coredns-7db6d8ff4d-tv8kl" [f1cb6c9c-3857-49b9-8eb6-c26cb64d2ba1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0701 14:40:04.124441 3774537 system_pods.go:89] "etcd-ha-767646" [8fc5d6c5-1fcd-49e5-9d57-3fd5f6ea1c96] Running
	I0701 14:40:04.124446 3774537 system_pods.go:89] "etcd-ha-767646-m02" [054be01f-55b0-444b-aad5-4b4a80c52690] Running
	I0701 14:40:04.124452 3774537 system_pods.go:89] "etcd-ha-767646-m03" [d4fd1cc2-d6e6-43bd-b5ae-ff03b9cc3dc7] Running
	I0701 14:40:04.124460 3774537 system_pods.go:89] "kindnet-7q2qb" [295d67c7-95ee-404e-9cad-2917ad62719f] Running
	I0701 14:40:04.124465 3774537 system_pods.go:89] "kindnet-hcsth" [ec12918e-9c57-4874-9e64-d92bc2a92ee1] Running
	I0701 14:40:04.124472 3774537 system_pods.go:89] "kindnet-nmzbs" [750eaa60-a8d0-40a5-a609-51ff2d3a2017] Running
	I0701 14:40:04.124476 3774537 system_pods.go:89] "kindnet-vp2jn" [6d6ff337-5a04-4219-ba5c-a7491a903a87] Running
	I0701 14:40:04.124483 3774537 system_pods.go:89] "kube-apiserver-ha-767646" [efb0aa7c-9838-4d27-b9b3-0d6f663d8692] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0701 14:40:04.124489 3774537 system_pods.go:89] "kube-apiserver-ha-767646-m02" [1f764b16-60fe-4a04-b456-8fc8720f7919] Running
	I0701 14:40:04.124495 3774537 system_pods.go:89] "kube-apiserver-ha-767646-m03" [8ff4a51a-e168-463d-870f-4c9a45935ac3] Running
	I0701 14:40:04.124512 3774537 system_pods.go:89] "kube-controller-manager-ha-767646" [5771068f-6419-4999-8d57-516849064a9e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0701 14:40:04.124517 3774537 system_pods.go:89] "kube-controller-manager-ha-767646-m02" [e764ef2c-9ea2-4a5b-8fcf-c5ee803e49f0] Running
	I0701 14:40:04.124526 3774537 system_pods.go:89] "kube-controller-manager-ha-767646-m03" [7c88841c-8ca6-4059-a58b-391ce6c764a8] Running
	I0701 14:40:04.124531 3774537 system_pods.go:89] "kube-proxy-48fx2" [280caa88-d5cd-41b7-8088-e3602feb2e08] Running
	I0701 14:40:04.124536 3774537 system_pods.go:89] "kube-proxy-6gt25" [9440c1ab-9dcc-43ef-80ee-33fbb76a1808] Running
	I0701 14:40:04.124546 3774537 system_pods.go:89] "kube-proxy-dz99m" [20ccadc4-322b-4505-bf19-afa3844a23d3] Running
	I0701 14:40:04.124550 3774537 system_pods.go:89] "kube-proxy-s476n" [151468ea-30d3-4a85-ba6b-3b080a110ddc] Running
	I0701 14:40:04.124554 3774537 system_pods.go:89] "kube-scheduler-ha-767646" [232f42d2-99ef-46b0-b71d-15a01c015c60] Running
	I0701 14:40:04.124559 3774537 system_pods.go:89] "kube-scheduler-ha-767646-m02" [7300cf4d-8d81-433e-814a-63c23cf30f09] Running
	I0701 14:40:04.124567 3774537 system_pods.go:89] "kube-scheduler-ha-767646-m03" [55b8ea18-94bd-4a11-bbfd-6d93fe948bf5] Running
	I0701 14:40:04.124571 3774537 system_pods.go:89] "kube-vip-ha-767646" [ed89d62e-c168-4566-8251-f6c2e39cb37a] Running
	I0701 14:40:04.124574 3774537 system_pods.go:89] "kube-vip-ha-767646-m02" [11521b91-c961-454b-96da-69f84b4dc018] Running
	I0701 14:40:04.124578 3774537 system_pods.go:89] "kube-vip-ha-767646-m03" [64bad709-c831-4bf2-9604-fd2424cdcc22] Running
	I0701 14:40:04.124582 3774537 system_pods.go:89] "storage-provisioner" [0e97e688-e573-4cdf-aab8-c4b2aea4bba8] Running
	I0701 14:40:04.124589 3774537 system_pods.go:126] duration metric: took 17.140936ms to wait for k8s-apps to be running ...
	I0701 14:40:04.124602 3774537 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 14:40:04.124659 3774537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 14:40:04.149476 3774537 system_svc.go:56] duration metric: took 24.864885ms WaitForService to wait for kubelet
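The kubelet wait above is just systemctl is-active run over SSH; the command exits 0 only while the unit is active, so no output parsing is needed. Run locally, an equivalent check might look like this (arguments mirrored verbatim from the log line above):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning mirrors the "sudo systemctl is-active --quiet service kubelet"
// check from the log: a zero exit status means the unit is active.
func kubeletRunning() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletRunning())
}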
	I0701 14:40:04.149548 3774537 kubeadm.go:576] duration metric: took 1m12.990796653s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 14:40:04.149584 3774537 node_conditions.go:102] verifying NodePressure condition ...
	I0701 14:40:04.149685 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0701 14:40:04.149709 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:04.149730 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:04.149752 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:04.152879 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:04.154354 3774537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0701 14:40:04.154389 3774537 node_conditions.go:123] node cpu capacity is 2
	I0701 14:40:04.154403 3774537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0701 14:40:04.154409 3774537 node_conditions.go:123] node cpu capacity is 2
	I0701 14:40:04.154413 3774537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0701 14:40:04.154417 3774537 node_conditions.go:123] node cpu capacity is 2
	I0701 14:40:04.154422 3774537 node_conditions.go:105] duration metric: took 4.817582ms to run NodePressure ...
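The NodePressure step lists every node's capacity; the values logged above (203034800Ki of ephemeral storage and 2 CPUs for each of the three nodes) come straight from each Node object's status. Read with client-go, under the same kubeconfig assumption as the earlier sketches, it looks roughly like:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print each node's capacity, matching the node_conditions lines above.
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[v1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
	}
}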
	I0701 14:40:04.154435 3774537 start.go:240] waiting for startup goroutines ...
	I0701 14:40:04.154462 3774537 start.go:254] writing updated cluster config ...
	I0701 14:40:04.157649 3774537 out.go:177] 
	I0701 14:40:04.160425 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:40:04.160580 3774537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/config.json ...
	I0701 14:40:04.163601 3774537 out.go:177] * Starting "ha-767646-m04" worker node in "ha-767646" cluster
	I0701 14:40:04.166822 3774537 cache.go:121] Beginning downloading kic base image for docker with crio
	I0701 14:40:04.169371 3774537 out.go:177] * Pulling base image v0.0.44-1719413016-19142 ...
	I0701 14:40:04.171991 3774537 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:40:04.172026 3774537 cache.go:56] Caching tarball of preloaded images
	I0701 14:40:04.172078 3774537 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon
	I0701 14:40:04.172138 3774537 preload.go:173] Found /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0701 14:40:04.172149 3774537 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0701 14:40:04.172278 3774537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/config.json ...
	I0701 14:40:04.188651 3774537 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon, skipping pull
	I0701 14:40:04.188675 3774537 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d exists in daemon, skipping load
	I0701 14:40:04.188695 3774537 cache.go:194] Successfully downloaded all kic artifacts
	I0701 14:40:04.188724 3774537 start.go:360] acquireMachinesLock for ha-767646-m04: {Name:mk881468842b1cd01270c588475dbfd7115997bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 14:40:04.188785 3774537 start.go:364] duration metric: took 44.784µs to acquireMachinesLock for "ha-767646-m04"
	I0701 14:40:04.188805 3774537 start.go:96] Skipping create...Using existing machine configuration
	I0701 14:40:04.188811 3774537 fix.go:54] fixHost starting: m04
	I0701 14:40:04.189126 3774537 cli_runner.go:164] Run: docker container inspect ha-767646-m04 --format={{.State.Status}}
	I0701 14:40:04.206044 3774537 fix.go:112] recreateIfNeeded on ha-767646-m04: state=Stopped err=<nil>
	W0701 14:40:04.206074 3774537 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 14:40:04.209484 3774537 out.go:177] * Restarting existing docker container for "ha-767646-m04" ...
	I0701 14:40:04.211954 3774537 cli_runner.go:164] Run: docker start ha-767646-m04
	I0701 14:40:04.556403 3774537 cli_runner.go:164] Run: docker container inspect ha-767646-m04 --format={{.State.Status}}
	I0701 14:40:04.578839 3774537 kic.go:430] container "ha-767646-m04" state is running.
	I0701 14:40:04.579321 3774537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646-m04
	I0701 14:40:04.601769 3774537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/config.json ...
	I0701 14:40:04.602019 3774537 machine.go:94] provisionDockerMachine start ...
	I0701 14:40:04.602090 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m04
	I0701 14:40:04.625533 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:40:04.625782 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33970 <nil> <nil>}
	I0701 14:40:04.625797 3774537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 14:40:04.626421 3774537 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0701 14:40:07.768554 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767646-m04
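The "ssh: handshake failed: EOF" above is expected right after "docker start": sshd inside the restarted container is not yet accepting connections, so the provisioner simply retries until the dial succeeds (about three seconds later here). A rough shell equivalent of that wait, assuming the mapped port 33970 and the machine key shown later in this log:

	# wait for sshd in the restarted container to come up (illustrative)
	KEY=/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m04/id_rsa
	until ssh -p 33970 -i "$KEY" -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	      docker@127.0.0.1 true 2>/dev/null; do
	    sleep 1
	done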
	
	I0701 14:40:07.768619 3774537 ubuntu.go:169] provisioning hostname "ha-767646-m04"
	I0701 14:40:07.768710 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m04
	I0701 14:40:07.786664 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:40:07.786938 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33970 <nil> <nil>}
	I0701 14:40:07.786958 3774537 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-767646-m04 && echo "ha-767646-m04" | sudo tee /etc/hostname
	I0701 14:40:07.937699 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-767646-m04
	
	I0701 14:40:07.937784 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m04
	I0701 14:40:07.955622 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:40:07.955866 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33970 <nil> <nil>}
	I0701 14:40:07.955887 3774537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-767646-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-767646-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-767646-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 14:40:08.097348 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 14:40:08.097373 3774537 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19166-3708336/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-3708336/.minikube}
	I0701 14:40:08.097388 3774537 ubuntu.go:177] setting up certificates
	I0701 14:40:08.097397 3774537 provision.go:84] configureAuth start
	I0701 14:40:08.097462 3774537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646-m04
	I0701 14:40:08.117198 3774537 provision.go:143] copyHostCerts
	I0701 14:40:08.117238 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem
	I0701 14:40:08.117270 3774537 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem, removing ...
	I0701 14:40:08.117276 3774537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem
	I0701 14:40:08.117352 3774537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem (1123 bytes)
	I0701 14:40:08.117433 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem
	I0701 14:40:08.117449 3774537 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem, removing ...
	I0701 14:40:08.117454 3774537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem
	I0701 14:40:08.117479 3774537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem (1675 bytes)
	I0701 14:40:08.117520 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem
	I0701 14:40:08.117534 3774537 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem, removing ...
	I0701 14:40:08.117538 3774537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem
	I0701 14:40:08.117561 3774537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem (1082 bytes)
	I0701 14:40:08.117607 3774537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem org=jenkins.ha-767646-m04 san=[127.0.0.1 192.168.49.5 ha-767646-m04 localhost minikube]
	I0701 14:40:08.363236 3774537 provision.go:177] copyRemoteCerts
	I0701 14:40:08.363330 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 14:40:08.363398 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m04
	I0701 14:40:08.381721 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33970 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m04/id_rsa Username:docker}
	I0701 14:40:08.482483 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0701 14:40:08.482542 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0701 14:40:08.506650 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0701 14:40:08.506736 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0701 14:40:08.532132 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0701 14:40:08.532191 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0701 14:40:08.556949 3774537 provision.go:87] duration metric: took 459.538262ms to configureAuth
	I0701 14:40:08.556977 3774537 ubuntu.go:193] setting minikube options for container-runtime
	I0701 14:40:08.557274 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:40:08.557384 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m04
	I0701 14:40:08.573874 3774537 main.go:141] libmachine: Using SSH client type: native
	I0701 14:40:08.574160 3774537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 33970 <nil> <nil>}
	I0701 14:40:08.574179 3774537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0701 14:40:08.866254 3774537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0701 14:40:08.866276 3774537 machine.go:97] duration metric: took 4.264238805s to provisionDockerMachine
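The "%!s(MISSING)" in the logged command above is Go's fmt marker for a format verb with no matching argument; it almost certainly comes from a literal "%s" in the command template being re-interpreted when the log line itself is formatted, since the file content echoed back by tee (CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ') shows the command executed with the value substituted correctly. The same rendering artifact shows up below as "%!p(MISSING)" in a find -printf expression.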
	I0701 14:40:08.866295 3774537 start.go:293] postStartSetup for "ha-767646-m04" (driver="docker")
	I0701 14:40:08.866308 3774537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 14:40:08.866380 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 14:40:08.866427 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m04
	I0701 14:40:08.888765 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33970 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m04/id_rsa Username:docker}
	I0701 14:40:08.990792 3774537 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 14:40:08.994457 3774537 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 14:40:08.994504 3774537 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 14:40:08.994515 3774537 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 14:40:08.994525 3774537 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0701 14:40:08.994535 3774537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/addons for local assets ...
	I0701 14:40:08.994611 3774537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/files for local assets ...
	I0701 14:40:08.994702 3774537 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem -> 37137252.pem in /etc/ssl/certs
	I0701 14:40:08.994721 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem -> /etc/ssl/certs/37137252.pem
	I0701 14:40:08.994836 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 14:40:09.004727 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem --> /etc/ssl/certs/37137252.pem (1708 bytes)
	I0701 14:40:09.031054 3774537 start.go:296] duration metric: took 164.743099ms for postStartSetup
	I0701 14:40:09.031143 3774537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 14:40:09.031190 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m04
	I0701 14:40:09.048464 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33970 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m04/id_rsa Username:docker}
	I0701 14:40:09.142178 3774537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 14:40:09.148165 3774537 fix.go:56] duration metric: took 4.959347154s for fixHost
	I0701 14:40:09.148234 3774537 start.go:83] releasing machines lock for "ha-767646-m04", held for 4.959439503s
	I0701 14:40:09.148341 3774537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646-m04
	I0701 14:40:09.168495 3774537 out.go:177] * Found network options:
	I0701 14:40:09.170880 3774537 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0701 14:40:09.173127 3774537 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 14:40:09.173155 3774537 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 14:40:09.173181 3774537 proxy.go:119] fail to check proxy env: Error ip not in block
	W0701 14:40:09.173195 3774537 proxy.go:119] fail to check proxy env: Error ip not in block
	I0701 14:40:09.173267 3774537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0701 14:40:09.173314 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m04
	I0701 14:40:09.173580 3774537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 14:40:09.173630 3774537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m04
	I0701 14:40:09.193581 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33970 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m04/id_rsa Username:docker}
	I0701 14:40:09.208324 3774537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33970 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m04/id_rsa Username:docker}
	I0701 14:40:09.474655 3774537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 14:40:09.479271 3774537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 14:40:09.488865 3774537 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0701 14:40:09.488952 3774537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 14:40:09.498809 3774537 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0701 14:40:09.498835 3774537 start.go:494] detecting cgroup driver to use...
	I0701 14:40:09.498868 3774537 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0701 14:40:09.498928 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 14:40:09.511071 3774537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 14:40:09.522622 3774537 docker.go:217] disabling cri-docker service (if available) ...
	I0701 14:40:09.522705 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0701 14:40:09.536006 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0701 14:40:09.548161 3774537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0701 14:40:09.648998 3774537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0701 14:40:09.754139 3774537 docker.go:233] disabling docker service ...
	I0701 14:40:09.754240 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 14:40:09.767868 3774537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 14:40:09.780490 3774537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 14:40:09.874752 3774537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 14:40:09.968932 3774537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 14:40:09.981058 3774537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 14:40:09.997976 3774537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0701 14:40:09.998055 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:40:10.020685 3774537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0701 14:40:10.020776 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:40:10.037791 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:40:10.049954 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:40:10.060453 3774537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 14:40:10.070767 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:40:10.082311 3774537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 14:40:10.095018 3774537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
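Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a fragment roughly like the following (illustrative only; key placement follows CRI-O's stock section layout, and untouched defaults are omitted):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]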
	I0701 14:40:10.105896 3774537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 14:40:10.116648 3774537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 14:40:10.127185 3774537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:40:10.216022 3774537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0701 14:40:10.351301 3774537 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0701 14:40:10.351374 3774537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0701 14:40:10.355269 3774537 start.go:562] Will wait 60s for crictl version
	I0701 14:40:10.355334 3774537 ssh_runner.go:195] Run: which crictl
	I0701 14:40:10.359406 3774537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 14:40:10.407464 3774537 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0701 14:40:10.407582 3774537 ssh_runner.go:195] Run: crio --version
	I0701 14:40:10.450139 3774537 ssh_runner.go:195] Run: crio --version
	I0701 14:40:10.517327 3774537 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.24.6 ...
	I0701 14:40:10.519535 3774537 out.go:177]   - env NO_PROXY=192.168.49.2
	I0701 14:40:10.522062 3774537 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0701 14:40:10.524394 3774537 cli_runner.go:164] Run: docker network inspect ha-767646 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 14:40:10.539076 3774537 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0701 14:40:10.542874 3774537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
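The one-liner above is an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal entry, the fresh mapping is appended, and the temp file is copied back with sudo cp (a plain shell redirect would not run as root). The same pattern is reused below for control-plane.minikube.internal, which is mapped to the HA virtual IP 192.168.49.254.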
	I0701 14:40:10.553883 3774537 mustload.go:65] Loading cluster: ha-767646
	I0701 14:40:10.554126 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:40:10.554378 3774537 cli_runner.go:164] Run: docker container inspect ha-767646 --format={{.State.Status}}
	I0701 14:40:10.574179 3774537 host.go:66] Checking if "ha-767646" exists ...
	I0701 14:40:10.574471 3774537 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646 for IP: 192.168.49.5
	I0701 14:40:10.574487 3774537 certs.go:194] generating shared ca certs ...
	I0701 14:40:10.574502 3774537 certs.go:226] acquiring lock for ca certs: {Name:mkef61a10d340f62d4856e4c226678a7bd970ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 14:40:10.574629 3774537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key
	I0701 14:40:10.574675 3774537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key
	I0701 14:40:10.574689 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0701 14:40:10.574702 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0701 14:40:10.574719 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0701 14:40:10.574729 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0701 14:40:10.574788 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem (1338 bytes)
	W0701 14:40:10.574819 3774537 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725_empty.pem, impossibly tiny 0 bytes
	I0701 14:40:10.574831 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 14:40:10.574856 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem (1082 bytes)
	I0701 14:40:10.574881 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem (1123 bytes)
	I0701 14:40:10.575055 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem (1675 bytes)
	I0701 14:40:10.575114 3774537 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem (1708 bytes)
	I0701 14:40:10.575155 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:40:10.575170 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem -> /usr/share/ca-certificates/3713725.pem
	I0701 14:40:10.575189 3774537 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem -> /usr/share/ca-certificates/37137252.pem
	I0701 14:40:10.575207 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 14:40:10.602786 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 14:40:10.627428 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 14:40:10.652246 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 14:40:10.688804 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 14:40:10.717171 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem --> /usr/share/ca-certificates/3713725.pem (1338 bytes)
	I0701 14:40:10.745592 3774537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem --> /usr/share/ca-certificates/37137252.pem (1708 bytes)
	I0701 14:40:10.778894 3774537 ssh_runner.go:195] Run: openssl version
	I0701 14:40:10.785495 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/37137252.pem && ln -fs /usr/share/ca-certificates/37137252.pem /etc/ssl/certs/37137252.pem"
	I0701 14:40:10.797470 3774537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/37137252.pem
	I0701 14:40:10.801167 3774537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 14:25 /usr/share/ca-certificates/37137252.pem
	I0701 14:40:10.801230 3774537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/37137252.pem
	I0701 14:40:10.808225 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/37137252.pem /etc/ssl/certs/3ec20f2e.0"
	I0701 14:40:10.817950 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 14:40:10.828588 3774537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:40:10.832235 3774537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:40:10.832301 3774537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 14:40:10.839328 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 14:40:10.848607 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3713725.pem && ln -fs /usr/share/ca-certificates/3713725.pem /etc/ssl/certs/3713725.pem"
	I0701 14:40:10.858259 3774537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3713725.pem
	I0701 14:40:10.862007 3774537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 14:25 /usr/share/ca-certificates/3713725.pem
	I0701 14:40:10.862081 3774537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3713725.pem
	I0701 14:40:10.869632 3774537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3713725.pem /etc/ssl/certs/51391683.0"
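The openssl x509 -hash runs above print each certificate's subject hash, and the ln -fs links into /etc/ssl/certs/<hash>.0 that follow use OpenSSL's standard CA-directory naming (the same layout c_rehash produces), which is how TLS clients on the node later locate these CAs. For one of the certs above, the equivalent is roughly:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/37137252.pem)
	sudo ln -fs /usr/share/ca-certificates/37137252.pem "/etc/ssl/certs/${h}.0"   # -> 3ec20f2e.0 here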
	I0701 14:40:10.880150 3774537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 14:40:10.885418 3774537 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0701 14:40:10.885482 3774537 kubeadm.go:928] updating node {m04 192.168.49.5 0 v1.30.2  false true} ...
	I0701 14:40:10.885579 3774537 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-767646-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-767646 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
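Note the empty ExecStart= line in the kubelet drop-in above: in a systemd override this first clears the inherited ExecStart before the new command line is set, so the drop-in fully replaces the packaged unit's start command rather than appending a second one.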
	I0701 14:40:10.885655 3774537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0701 14:40:10.896116 3774537 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 14:40:10.896190 3774537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0701 14:40:10.906558 3774537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0701 14:40:10.925740 3774537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 14:40:10.944251 3774537 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0701 14:40:10.948008 3774537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 14:40:10.959050 3774537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:40:11.054348 3774537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 14:40:11.067790 3774537 start.go:234] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}
	I0701 14:40:11.068149 3774537 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:40:11.071402 3774537 out.go:177] * Verifying Kubernetes components...
	I0701 14:40:11.073882 3774537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 14:40:11.167125 3774537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 14:40:11.179937 3774537 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:40:11.180211 3774537 kapi.go:59] client config for ha-767646: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/client.crt", KeyFile:"/home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/ha-767646/client.key", CAFile:"/home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x179ece0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0701 14:40:11.180277 3774537 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0701 14:40:11.180493 3774537 node_ready.go:35] waiting up to 6m0s for node "ha-767646-m04" to be "Ready" ...
	I0701 14:40:11.180566 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:11.180576 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:11.180584 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:11.180589 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:11.183379 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
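The repeated GETs that follow are a roughly 500ms poll of the node object until its Ready condition turns True (it flips at 14:40:18 below, after about 7s). A rough kubectl equivalent of the same wait, assuming the ha-767646 kubeconfig context:

	until kubectl --context ha-767646 get node ha-767646-m04 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -qx True; do
	    sleep 0.5
	done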
	I0701 14:40:11.680977 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:11.681000 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:11.681038 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:11.681047 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:11.683927 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:12.181631 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:12.181698 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:12.181721 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:12.181743 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:12.184751 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:12.682608 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:12.682640 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:12.682650 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:12.682656 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:12.701124 3774537 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0701 14:40:13.181560 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:13.181631 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:13.181671 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:13.181702 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:13.186073 3774537 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 14:40:13.187675 3774537 node_ready.go:53] node "ha-767646-m04" has status "Ready":"Unknown"
	I0701 14:40:13.681279 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:13.681362 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:13.681394 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:13.681430 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:13.685858 3774537 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0701 14:40:14.180797 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:14.180816 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:14.180825 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:14.180829 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:14.191653 3774537 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 14:40:14.681389 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:14.681413 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:14.681422 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:14.681428 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:14.684151 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:15.181482 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:15.181518 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:15.181529 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:15.181536 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:15.184362 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:15.680739 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:15.680759 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:15.680771 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:15.680775 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:15.683807 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:15.684495 3774537 node_ready.go:53] node "ha-767646-m04" has status "Ready":"Unknown"
	I0701 14:40:16.181636 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:16.181663 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:16.181672 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:16.181676 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:16.184495 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:16.680754 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:16.680777 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:16.680786 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:16.680790 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:16.683628 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:17.181651 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:17.181673 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:17.181682 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:17.181685 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:17.184574 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:17.681286 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:17.681307 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:17.681316 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:17.681320 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:17.684107 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:17.685152 3774537 node_ready.go:53] node "ha-767646-m04" has status "Ready":"Unknown"
	I0701 14:40:18.180824 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:18.180847 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:18.180857 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:18.180862 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:18.183428 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:18.184058 3774537 node_ready.go:49] node "ha-767646-m04" has status "Ready":"True"
	I0701 14:40:18.184079 3774537 node_ready.go:38] duration metric: took 7.00356613s for node "ha-767646-m04" to be "Ready" ...
	I0701 14:40:18.184090 3774537 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0701 14:40:18.184155 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0701 14:40:18.184167 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:18.184175 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:18.184185 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:18.189790 3774537 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0701 14:40:18.196897 3774537 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:18.196997 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:18.197008 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:18.197048 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:18.197053 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:18.199874 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:18.200492 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:18.200502 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:18.200511 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:18.200515 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:18.202983 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:18.697801 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:18.697824 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:18.697834 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:18.697839 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:18.700615 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:18.701310 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:18.701328 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:18.701336 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:18.701340 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:18.703735 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:19.197814 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:19.197844 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:19.197857 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:19.197865 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:19.201041 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:19.201978 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:19.201998 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:19.202008 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:19.202012 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:19.204598 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:19.697173 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:19.697198 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:19.697208 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:19.697212 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:19.700085 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:19.700952 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:19.700969 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:19.700979 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:19.700982 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:19.703507 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:20.197400 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:20.197425 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:20.197439 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:20.197443 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:20.200486 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:20.201405 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:20.201425 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:20.201433 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:20.201438 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:20.204277 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:20.204867 3774537 pod_ready.go:102] pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace has status "Ready":"False"
	I0701 14:40:20.697198 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:20.697221 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:20.697237 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:20.697242 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:20.700227 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:20.700947 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:20.700963 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:20.700972 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:20.700976 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:20.703418 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:21.197711 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:21.197736 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:21.197746 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:21.197750 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:21.200602 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:21.201485 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:21.201502 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:21.201512 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:21.201518 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:21.211781 3774537 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0701 14:40:21.697692 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:21.697711 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:21.697721 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:21.697727 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:21.700686 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:21.701445 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:21.701465 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:21.701474 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:21.701479 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:21.704055 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:22.197457 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:22.197482 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:22.197492 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:22.197497 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:22.200303 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:22.200963 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:22.200974 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:22.200983 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:22.200987 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:22.203604 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:22.697191 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:22.697211 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:22.697220 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:22.697223 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:22.699972 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:22.700747 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:22.700763 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:22.700788 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:22.700797 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:22.703285 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:22.703893 3774537 pod_ready.go:102] pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace has status "Ready":"False"
	I0701 14:40:23.197140 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:23.197164 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:23.197174 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:23.197178 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:23.200215 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:23.200969 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:23.200985 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:23.200995 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:23.200999 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:23.203567 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:23.697792 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:23.697822 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:23.697832 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:23.697835 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:23.700804 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:23.701819 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:23.701834 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:23.701843 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:23.701849 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:23.704559 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:24.198039 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:24.198115 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:24.198137 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:24.198157 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:24.201275 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:24.202573 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:24.202639 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:24.202663 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:24.202684 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:24.205516 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:24.697187 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:24.697296 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:24.697320 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:24.697358 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:24.700200 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:24.701193 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:24.701263 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:24.701286 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:24.701340 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:24.704081 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:24.704762 3774537 pod_ready.go:102] pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace has status "Ready":"False"
	I0701 14:40:25.197481 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:25.197504 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:25.197513 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:25.197519 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:25.200640 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:25.201537 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:25.201554 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:25.201563 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:25.201603 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:25.204200 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:25.697357 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:25.697381 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:25.697390 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:25.697396 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:25.700373 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:25.701126 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:25.701178 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:25.701201 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:25.701223 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:25.703547 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:26.197864 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:26.197940 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:26.197964 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:26.197983 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:26.201139 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:26.202222 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:26.202239 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:26.202249 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:26.202254 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:26.205118 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:26.697105 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:26.697125 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:26.697133 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:26.697138 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:26.700616 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:26.701775 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:26.701795 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:26.701832 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:26.701844 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:26.713685 3774537 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0701 14:40:26.714519 3774537 pod_ready.go:102] pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace has status "Ready":"False"
	I0701 14:40:27.197796 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:27.197818 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:27.197826 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:27.197829 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:27.200715 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:27.201640 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:27.201657 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:27.201668 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:27.201673 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:27.204295 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:27.697773 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:27.697796 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:27.697806 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:27.697812 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:27.700548 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:27.701373 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:27.701392 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:27.701402 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:27.701407 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:27.703909 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:28.197791 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:28.197813 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:28.197824 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:28.197827 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:28.200758 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:28.201476 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:28.201495 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:28.201505 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:28.201509 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:28.203917 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:28.697209 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:28.697229 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:28.697239 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:28.697244 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:28.700309 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:28.701049 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:28.701067 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:28.701075 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:28.701080 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:28.703446 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:29.197970 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:29.197992 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:29.198001 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:29.198005 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:29.200925 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:29.202083 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:29.202140 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:29.202150 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:29.202154 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:29.205229 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:29.206093 3774537 pod_ready.go:102] pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace has status "Ready":"False"
	I0701 14:40:29.697925 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:29.697951 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:29.697960 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:29.697966 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:29.700883 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:29.701882 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:29.701900 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:29.701909 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:29.701914 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:29.705250 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:30.197270 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:30.197296 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:30.197307 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:30.197311 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:30.200307 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:30.201106 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:30.201126 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:30.201134 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:30.201138 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:30.204840 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:30.697997 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:30.698019 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:30.698029 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:30.698035 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:30.700870 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:30.701875 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:30.701896 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:30.701906 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:30.701911 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:30.704332 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.197190 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:31.197214 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.197223 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.197228 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.199882 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.201040 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:31.201056 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.201065 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.201068 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.204073 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.697420 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ggtnh
	I0701 14:40:31.697441 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.697451 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.697457 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.700258 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.701061 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:31.701081 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.701092 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.701120 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.704200 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:31.704893 3774537 pod_ready.go:97] node "ha-767646" hosting pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:31.704917 3774537 pod_ready.go:81] duration metric: took 13.507985273s for pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace to be "Ready" ...
	E0701 14:40:31.704929 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646" hosting pod "coredns-7db6d8ff4d-ggtnh" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:31.704937 3774537 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-tv8kl" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:31.705007 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-tv8kl
	I0701 14:40:31.705045 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.705054 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.705060 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.707594 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.708264 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:31.708276 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.708284 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.708289 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.710881 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.711759 3774537 pod_ready.go:97] node "ha-767646" hosting pod "coredns-7db6d8ff4d-tv8kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:31.711786 3774537 pod_ready.go:81] duration metric: took 6.836392ms for pod "coredns-7db6d8ff4d-tv8kl" in "kube-system" namespace to be "Ready" ...
	E0701 14:40:31.711814 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646" hosting pod "coredns-7db6d8ff4d-tv8kl" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:31.711829 3774537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:31.711901 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-767646
	I0701 14:40:31.711910 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.711918 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.711923 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.714425 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.715167 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:31.715190 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.715200 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.715206 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.717730 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.718319 3774537 pod_ready.go:97] node "ha-767646" hosting pod "etcd-ha-767646" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:31.718339 3774537 pod_ready.go:81] duration metric: took 6.503178ms for pod "etcd-ha-767646" in "kube-system" namespace to be "Ready" ...
	E0701 14:40:31.718350 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646" hosting pod "etcd-ha-767646" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:31.718361 3774537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:31.718430 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-767646-m02
	I0701 14:40:31.718442 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.718451 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.718456 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.721107 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.721739 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:40:31.721761 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.721771 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.721775 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.724416 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.725162 3774537 pod_ready.go:92] pod "etcd-ha-767646-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 14:40:31.725186 3774537 pod_ready.go:81] duration metric: took 6.808437ms for pod "etcd-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:31.725207 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:31.725276 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767646
	I0701 14:40:31.725286 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.725294 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.725299 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.727993 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.728957 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:31.728972 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.728981 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.728987 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.731572 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:31.732164 3774537 pod_ready.go:97] node "ha-767646" hosting pod "kube-apiserver-ha-767646" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:31.732186 3774537 pod_ready.go:81] duration metric: took 6.972614ms for pod "kube-apiserver-ha-767646" in "kube-system" namespace to be "Ready" ...
	E0701 14:40:31.732211 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646" hosting pod "kube-apiserver-ha-767646" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:31.732224 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:31.897438 3774537 request.go:629] Waited for 165.131731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767646-m02
	I0701 14:40:31.897549 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767646-m02
	I0701 14:40:31.897562 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:31.897572 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:31.897577 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:31.900297 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:32.098372 3774537 request.go:629] Waited for 197.357288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:40:32.098491 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:40:32.098501 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:32.098511 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:32.098515 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:32.101237 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:32.102135 3774537 pod_ready.go:92] pod "kube-apiserver-ha-767646-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 14:40:32.102198 3774537 pod_ready.go:81] duration metric: took 369.96129ms for pod "kube-apiserver-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:32.102226 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:32.298095 3774537 request.go:629] Waited for 195.777621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-767646
	I0701 14:40:32.298169 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-767646
	I0701 14:40:32.298180 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:32.298198 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:32.298206 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:32.301003 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:32.498224 3774537 request.go:629] Waited for 196.353701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:32.498322 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:32.498336 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:32.498351 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:32.498360 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:32.501414 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:32.502162 3774537 pod_ready.go:97] node "ha-767646" hosting pod "kube-controller-manager-ha-767646" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:32.502198 3774537 pod_ready.go:81] duration metric: took 399.950354ms for pod "kube-controller-manager-ha-767646" in "kube-system" namespace to be "Ready" ...
	E0701 14:40:32.502210 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646" hosting pod "kube-controller-manager-ha-767646" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:32.502227 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:32.697564 3774537 request.go:629] Waited for 195.244768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-767646-m02
	I0701 14:40:32.697623 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-767646-m02
	I0701 14:40:32.697630 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:32.697638 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:32.697646 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:32.700601 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:32.897701 3774537 request.go:629] Waited for 196.300696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:40:32.897780 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:40:32.897808 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:32.897817 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:32.897820 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:32.900490 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:32.900998 3774537 pod_ready.go:92] pod "kube-controller-manager-ha-767646-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 14:40:32.901037 3774537 pod_ready.go:81] duration metric: took 398.796456ms for pod "kube-controller-manager-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:32.901056 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6gt25" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:33.098313 3774537 request.go:629] Waited for 197.141984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gt25
	I0701 14:40:33.098393 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6gt25
	I0701 14:40:33.098406 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:33.098416 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:33.098421 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:33.101302 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:33.298382 3774537 request.go:629] Waited for 196.2839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:33.298449 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:33.298470 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:33.298479 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:33.298483 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:33.301176 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:33.302045 3774537 pod_ready.go:97] node "ha-767646" hosting pod "kube-proxy-6gt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:33.302073 3774537 pod_ready.go:81] duration metric: took 401.009311ms for pod "kube-proxy-6gt25" in "kube-system" namespace to be "Ready" ...
	E0701 14:40:33.302084 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646" hosting pod "kube-proxy-6gt25" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:33.302123 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dz99m" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:33.498421 3774537 request.go:629] Waited for 196.195308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dz99m
	I0701 14:40:33.498514 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dz99m
	I0701 14:40:33.498527 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:33.498536 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:33.498541 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:33.501392 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:33.698168 3774537 request.go:629] Waited for 196.151582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:33.698227 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m04
	I0701 14:40:33.698232 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:33.698241 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:33.698247 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:33.700986 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:33.701584 3774537 pod_ready.go:92] pod "kube-proxy-dz99m" in "kube-system" namespace has status "Ready":"True"
	I0701 14:40:33.701604 3774537 pod_ready.go:81] duration metric: took 399.467314ms for pod "kube-proxy-dz99m" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:33.701630 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s476n" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:33.897414 3774537 request.go:629] Waited for 195.713564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s476n
	I0701 14:40:33.897502 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s476n
	I0701 14:40:33.897516 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:33.897527 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:33.897532 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:33.904144 3774537 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0701 14:40:34.098155 3774537 request.go:629] Waited for 193.323748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:40:34.098236 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:40:34.098252 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:34.098279 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:34.098283 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:34.100982 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:34.101908 3774537 pod_ready.go:92] pod "kube-proxy-s476n" in "kube-system" namespace has status "Ready":"True"
	I0701 14:40:34.101956 3774537 pod_ready.go:81] duration metric: took 400.311802ms for pod "kube-proxy-s476n" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:34.101999 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-767646" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:34.297830 3774537 request.go:629] Waited for 195.747928ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-767646
	I0701 14:40:34.297939 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-767646
	I0701 14:40:34.297953 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:34.297963 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:34.297967 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:34.300624 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:34.497448 3774537 request.go:629] Waited for 196.118803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:34.497503 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646
	I0701 14:40:34.497510 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:34.497518 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:34.497525 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:34.500230 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:34.500898 3774537 pod_ready.go:97] node "ha-767646" hosting pod "kube-scheduler-ha-767646" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:34.500920 3774537 pod_ready.go:81] duration metric: took 398.896764ms for pod "kube-scheduler-ha-767646" in "kube-system" namespace to be "Ready" ...
	E0701 14:40:34.500931 3774537 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-767646" hosting pod "kube-scheduler-ha-767646" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-767646" has status "Ready":"Unknown"
	I0701 14:40:34.500939 3774537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:34.697854 3774537 request.go:629] Waited for 196.846211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-767646-m02
	I0701 14:40:34.697935 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-767646-m02
	I0701 14:40:34.697967 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:34.697977 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:34.697996 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:34.700846 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:34.897971 3774537 request.go:629] Waited for 196.295887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:40:34.898025 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-767646-m02
	I0701 14:40:34.898035 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:34.898045 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:34.898050 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:34.900689 3774537 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0701 14:40:34.901315 3774537 pod_ready.go:92] pod "kube-scheduler-ha-767646-m02" in "kube-system" namespace has status "Ready":"True"
	I0701 14:40:34.901339 3774537 pod_ready.go:81] duration metric: took 400.391622ms for pod "kube-scheduler-ha-767646-m02" in "kube-system" namespace to be "Ready" ...
	I0701 14:40:34.901367 3774537 pod_ready.go:38] duration metric: took 16.717266585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
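The block above is minikube's pod-readiness wait in action: for each system pod it alternates a GET on the pod with a GET on the pod's node roughly every 500ms, and gives up on a pod early when its node reports "Ready":"Unknown" (the "skipping!" branch). Below is a minimal sketch of that loop against client-go; the 500ms interval and 6m budget are taken from the traces, while the helper name and everything else is illustrative, not minikube's actual code:

    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the way the pod_ready traces above do:
    // pod GET, then node GET, every 500ms for up to 6 minutes.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                node, err := c.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                // The "skipping!" branch: a pod on a node that is not Ready
                // cannot become Ready, so stop waiting for it.
                for _, nc := range node.Status.Conditions {
                    if nc.Type == corev1.NodeReady && nc.Status != corev1.ConditionTrue {
                        return false, fmt.Errorf("node %q not Ready (%s)", node.Name, nc.Status)
                    }
                }
                for _, pc := range pod.Status.Conditions {
                    if pc.Type == corev1.PodReady && pc.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }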
	I0701 14:40:34.901388 3774537 system_svc.go:44] waiting for kubelet service to be running ....
	I0701 14:40:34.901473 3774537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 14:40:34.913301 3774537 system_svc.go:56] duration metric: took 11.907161ms WaitForService to wait for kubelet
	I0701 14:40:34.913330 3774537 kubeadm.go:576] duration metric: took 23.845143784s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 14:40:34.913378 3774537 node_conditions.go:102] verifying NodePressure condition ...
	I0701 14:40:35.097673 3774537 request.go:629] Waited for 184.205783ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0701 14:40:35.097726 3774537 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0701 14:40:35.097732 3774537 round_trippers.go:469] Request Headers:
	I0701 14:40:35.097740 3774537 round_trippers.go:473]     Accept: application/json, */*
	I0701 14:40:35.097752 3774537 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0701 14:40:35.101047 3774537 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0701 14:40:35.102325 3774537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0701 14:40:35.102355 3774537 node_conditions.go:123] node cpu capacity is 2
	I0701 14:40:35.102365 3774537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0701 14:40:35.102371 3774537 node_conditions.go:123] node cpu capacity is 2
	I0701 14:40:35.102376 3774537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0701 14:40:35.102381 3774537 node_conditions.go:123] node cpu capacity is 2
	I0701 14:40:35.102386 3774537 node_conditions.go:105] duration metric: took 189.001899ms to run NodePressure ...
	I0701 14:40:35.102399 3774537 start.go:240] waiting for startup goroutines ...
	I0701 14:40:35.102420 3774537 start.go:254] writing updated cluster config ...
	I0701 14:40:35.102755 3774537 ssh_runner.go:195] Run: rm -f paused
	I0701 14:40:35.162327 3774537 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0701 14:40:35.167153 3774537 out.go:177] * Done! kubectl is now configured to use "ha-767646" cluster and "default" namespace by default
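The repeated "Waited for ...ms due to client-side throttling, not priority and fairness" lines above come from client-go's local token-bucket rate limiter, not from the API server: the paired pod+node GETs exceed the client's default budget (historically 5 QPS with a burst of 10), so each request queues locally for ~200ms. A sketch of raising those limits on the client config, assuming a kubeconfig-based setup; the 50/100 values are illustrative:

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset with a larger client-side rate
    // budget so bursts of GETs are not queued locally.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go's default is 5 requests/second
        cfg.Burst = 100 // default burst is 10
        return kubernetes.NewForConfig(cfg)
    }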
	
	
	==> CRI-O <==
	Jul 01 14:39:59 ha-767646 crio[638]: time="2024-07-01 14:39:59.892596175Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 01 14:39:59 ha-767646 crio[638]: time="2024-07-01 14:39:59.892631154Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 01 14:39:59 ha-767646 crio[638]: time="2024-07-01 14:39:59.892647761Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jul 01 14:39:59 ha-767646 crio[638]: time="2024-07-01 14:39:59.895503083Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 01 14:39:59 ha-767646 crio[638]: time="2024-07-01 14:39:59.895536290Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.128162561Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d711529d-620e-413a-9278-a77284cefb2c name=/runtime.v1.ImageService/ImageStatus
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.128447011Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d711529d-620e-413a-9278-a77284cefb2c name=/runtime.v1.ImageService/ImageStatus
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.129235802Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e7cd4d98-0c5e-47a9-ad9c-17dc1fe5f392 name=/runtime.v1.ImageService/ImageStatus
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.129455833Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2 gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944],Size_:29037500,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e7cd4d98-0c5e-47a9-ad9c-17dc1fe5f392 name=/runtime.v1.ImageService/ImageStatus
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.130816060Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=474f714b-ee96-45bf-b26b-049e33f6e73f name=/runtime.v1.RuntimeService/CreateContainer
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.130932729Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.231464757Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6d8de77c8073a816092157f5ebc1a7988913a7fa4074677502a5b41ee329ff39/merged/etc/passwd: no such file or directory"
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.233650198Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6d8de77c8073a816092157f5ebc1a7988913a7fa4074677502a5b41ee329ff39/merged/etc/group: no such file or directory"
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.343848898Z" level=info msg="Created container e5e473b88e00971249a66df1ea453e000ff57e1f0d6feb884f72eb2b5e064c10: kube-system/storage-provisioner/storage-provisioner" id=474f714b-ee96-45bf-b26b-049e33f6e73f name=/runtime.v1.RuntimeService/CreateContainer
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.344471706Z" level=info msg="Starting container: e5e473b88e00971249a66df1ea453e000ff57e1f0d6feb884f72eb2b5e064c10" id=c818f10a-363f-457d-9822-1aab277625b3 name=/runtime.v1.RuntimeService/StartContainer
	Jul 01 14:40:00 ha-767646 crio[638]: time="2024-07-01 14:40:00.357409023Z" level=info msg="Started container" PID=1876 containerID=e5e473b88e00971249a66df1ea453e000ff57e1f0d6feb884f72eb2b5e064c10 description=kube-system/storage-provisioner/storage-provisioner id=c818f10a-363f-457d-9822-1aab277625b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6bd1001bb8a70bb4fc619a94343f07e816d8a8d84c8117c25a6a82114d4358bf
	Jul 01 14:40:11 ha-767646 crio[638]: time="2024-07-01 14:40:11.882070930Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.30.2" id=84db5d74-7599-493a-af24-f7f90c6f04e9 name=/runtime.v1.ImageService/ImageStatus
	Jul 01 14:40:11 ha-767646 crio[638]: time="2024-07-01 14:40:11.882333324Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e registry.k8s.io/kube-controller-manager@sha256:8ddc81caccc97ada7e3c53ebe2c03240f25cd123c479752a1c314c402b972028],Size_:108229958,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=84db5d74-7599-493a-af24-f7f90c6f04e9 name=/runtime.v1.ImageService/ImageStatus
	Jul 01 14:40:11 ha-767646 crio[638]: time="2024-07-01 14:40:11.883169689Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.30.2" id=5a462e8b-20e8-4d34-8e12-0f616a7088db name=/runtime.v1.ImageService/ImageStatus
	Jul 01 14:40:11 ha-767646 crio[638]: time="2024-07-01 14:40:11.883376346Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e registry.k8s.io/kube-controller-manager@sha256:8ddc81caccc97ada7e3c53ebe2c03240f25cd123c479752a1c314c402b972028],Size_:108229958,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=5a462e8b-20e8-4d34-8e12-0f616a7088db name=/runtime.v1.ImageService/ImageStatus
	Jul 01 14:40:11 ha-767646 crio[638]: time="2024-07-01 14:40:11.884701536Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-767646/kube-controller-manager" id=1b423b39-9a6b-4892-9157-8620adad7bc8 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 01 14:40:11 ha-767646 crio[638]: time="2024-07-01 14:40:11.884818149Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 01 14:40:11 ha-767646 crio[638]: time="2024-07-01 14:40:11.963172219Z" level=info msg="Created container a453d95ada89a92fe363744bfc95995f49f3f85c1034cc753f3f8f9bf0507a94: kube-system/kube-controller-manager-ha-767646/kube-controller-manager" id=1b423b39-9a6b-4892-9157-8620adad7bc8 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 01 14:40:11 ha-767646 crio[638]: time="2024-07-01 14:40:11.963845013Z" level=info msg="Starting container: a453d95ada89a92fe363744bfc95995f49f3f85c1034cc753f3f8f9bf0507a94" id=42e1ee8c-9c22-4a53-af95-4023074998e4 name=/runtime.v1.RuntimeService/StartContainer
	Jul 01 14:40:11 ha-767646 crio[638]: time="2024-07-01 14:40:11.970771941Z" level=info msg="Started container" PID=1916 containerID=a453d95ada89a92fe363744bfc95995f49f3f85c1034cc753f3f8f9bf0507a94 description=kube-system/kube-controller-manager-ha-767646/kube-controller-manager id=42e1ee8c-9c22-4a53-af95-4023074998e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=547fa00e3592cdd8fdde0769ef7e2ced4aa87d7187b0356c402ea8af15bbd58c
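The CRI-O section traces the runtime side of the same restart: each container goes through the standard CRI sequence ImageStatus, then CreateContainer, then StartContainer (storage-provisioner at 14:40:00, then kube-controller-manager at 14:40:11; the table that follows shows these as attempts 5 and 8). The "Failed to open /etc/passwd" warnings are expected for images that ship without those files. A sketch of the same two RuntimeService calls over the CRI gRPC API; the socket path matches the node's cri-socket annotation, while the function name and the elided configs are illustrative:

    package sketch

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // startContainer issues CreateContainer then StartContainer against
    // CRI-O, mirroring the RuntimeService log lines above.
    func startContainer(ctx context.Context, sandboxID string,
        cfg *runtimeapi.ContainerConfig, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            return "", err
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sandboxID,
            Config:        cfg,
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return "", err
        }
        _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        })
        return created.ContainerId, err
    }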
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a453d95ada89a       e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567   25 seconds ago       Running             kube-controller-manager   8                   547fa00e3592c       kube-controller-manager-ha-767646
	e5e473b88e009       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   37 seconds ago       Running             storage-provisioner       5                   6bd1001bb8a70       storage-provisioner
	35ce2326ef64a       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   40 seconds ago       Running             kube-vip                  3                   116c3f652ec3b       kube-vip-ha-767646
	8298db67f51c3       84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0   44 seconds ago       Running             kube-apiserver            4                   b25292f6b124a       kube-apiserver-ha-767646
	314c30a2d788f       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   3935caa8fe01d       coredns-7db6d8ff4d-ggtnh
	24aab7627742b       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   32945cfc50d5a       busybox-fc5497c4f-8877b
	ee708ce147a35       2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93   About a minute ago   Running             coredns                   2                   df95ebc25d5a9       coredns-7db6d8ff4d-tv8kl
	701faf556352d       89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40   About a minute ago   Running             kindnet-cni               2                   d392430f0d35f       kindnet-vp2jn
	9fb99d837b693       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       4                   6bd1001bb8a70       storage-provisioner
	0732eeb813625       66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae   About a minute ago   Running             kube-proxy                2                   38be1d888fbfc       kube-proxy-6gt25
	3189f71a9c53a       e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567   About a minute ago   Exited              kube-controller-manager   7                   547fa00e3592c       kube-controller-manager-ha-767646
	66b2fa83742b7       84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0   About a minute ago   Exited              kube-apiserver            3                   b25292f6b124a       kube-apiserver-ha-767646
	30e08ecbad11e       c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5   About a minute ago   Running             kube-scheduler            2                   52bd318b9800c       kube-scheduler-ha-767646
	f8d3243d82cae       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   About a minute ago   Exited              kube-vip                  2                   116c3f652ec3b       kube-vip-ha-767646
	9552715e8faeb       014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd   About a minute ago   Running             etcd                      2                   4f3ab5ab2f86d       etcd-ha-767646
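Reading the table: ATTEMPT is the kubelet's per-container restart count, and the Exited/Running pairs sharing a POD ID (storage-provisioner 4 to 5, kube-controller-manager 7 to 8, kube-apiserver 3 to 4, kube-vip 2 to 3) show each control-plane container being replaced in its existing sandbox during the restart. The same counters are exposed through the API as status.containerStatuses[].restartCount; a hypothetical check (the threshold is illustrative):

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // flagRestarts prints kube-system containers whose restart count has
    // climbed; this is the API-side view of the ATTEMPT column above.
    func flagRestarts(ctx context.Context, c kubernetes.Interface) error {
        pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            for _, cs := range p.Status.ContainerStatuses {
                if cs.RestartCount >= 4 { // kube-apiserver above is on attempt 4
                    fmt.Printf("%s/%s: %d restarts\n", p.Name, cs.Name, cs.RestartCount)
                }
            }
        }
        return nil
    }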
	
	
	==> coredns [314c30a2d788f1cc6acb1f1aa275e597c9b66738a98d6cbb3c16c1396bc8048d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34230 - 18001 "HINFO IN 4532180386845098945.3834204512945560962. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024691911s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1614552682]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 14:39:29.612) (total time: 30001ms):
	Trace[1614552682]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:39:59.613)
	Trace[1614552682]: [30.001709902s] [30.001709902s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1037354321]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 14:39:29.613) (total time: 30001ms):
	Trace[1037354321]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:39:59.613)
	Trace[1037354321]: [30.001169466s] [30.001169466s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1533945165]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 14:39:29.613) (total time: 30001ms):
	Trace[1533945165]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:39:59.613)
	Trace[1533945165]: [30.001022838s] [30.001022838s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
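Both CoreDNS replicas (this one and the one below) tell the same story: after their own restart they could not list Namespaces, Services, or EndpointSlices through the in-cluster service VIP, because nothing was answering on 10.96.0.1:443 while kube-apiserver and kube-proxy were themselves coming back. Each reflector LIST therefore ran into the 30s deadline ("i/o timeout 30000ms"), and the ready plugin kept /ready failing ("Still waiting on: kubernetes"), which lines up with the coredns pods polling as not Ready earlier in the log. A sketch of the failing call, assuming the 30s client timeout seen in the traces:

    package sketch

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // listNamespaces reproduces the reflector's LIST: in-cluster config
    // resolves to the service VIP (here https://10.96.0.1:443), and the
    // 30s deadline is what expires in the traces above.
    func listNamespaces(ctx context.Context) error {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            return err
        }
        cfg.Timeout = 30 * time.Second
        c, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        _, err = c.CoreV1().Namespaces().List(ctx, metav1.ListOptions{Limit: 500})
        return err
    }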
	
	
	==> coredns [ee708ce147a3537c158f99d637b3dd91ede960df2f79ed0db2c0ec23e1f2baea] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41151 - 13789 "HINFO IN 3106464311532193736.1919167194809404271. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036545583s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1574225210]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 14:39:29.818) (total time: 30000ms):
	Trace[1574225210]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:39:59.818)
	Trace[1574225210]: [30.00093365s] [30.00093365s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[938573467]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 14:39:29.818) (total time: 30000ms):
	Trace[938573467]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:39:59.819)
	Trace[938573467]: [30.000445366s] [30.000445366s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[33627442]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (01-Jul-2024 14:39:29.818) (total time: 30000ms):
	Trace[33627442]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:39:59.819)
	Trace[33627442]: [30.000685139s] [30.000685139s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
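	
	==> note: probing the Service VIP (illustrative sketch) <==
	Both CoreDNS replicas report the same failure: their client-go reflectors cannot list Namespaces, Services, or EndpointSlices because every dial to the kubernetes Service VIP (10.96.0.1:443) times out after 30s. A minimal Go sketch of the same probe, run from a pod on the cluster network, isolates the symptom; the address comes from the logs above, and the program is not part of the test suite:
	
	// vipprobe.go: dial the kubernetes Service VIP the way the reflector does.
	package main
	
	import (
	    "fmt"
	    "net"
	    "time"
	)
	
	func main() {
	    conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	    if err != nil {
	        // An i/o timeout here mirrors the reflector errors above and points
	        // at Service routing (kube-proxy/iptables) rather than at CoreDNS.
	        fmt.Println("dial failed:", err)
	        return
	    }
	    conn.Close()
	    fmt.Println("service VIP reachable")
	}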
	
	
	==> describe nodes <==
	Name:               ha-767646
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-767646
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-767646
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_01T14_29_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 14:29:42 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767646
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 14:39:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 01 Jul 2024 14:39:16 +0000   Mon, 01 Jul 2024 14:40:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 01 Jul 2024 14:39:16 +0000   Mon, 01 Jul 2024 14:40:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 01 Jul 2024 14:39:16 +0000   Mon, 01 Jul 2024 14:40:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 01 Jul 2024 14:39:16 +0000   Mon, 01 Jul 2024 14:40:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-767646
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce698391ab79483199ba60830370f72f
	  System UUID:                bb79bcbd-8255-49eb-abfe-289f622b2130
	  Boot ID:                    030faa4f-44aa-434e-978f-182f6d212f48
	  Kernel Version:             5.15.0-1063-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8877b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 coredns-7db6d8ff4d-ggtnh             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-7db6d8ff4d-tv8kl             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-767646                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-vp2jn                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-767646             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-767646    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-6gt25                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-767646             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-767646                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 67s                    kube-proxy       
	  Normal  Starting                 4m46s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node ha-767646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node ha-767646 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node ha-767646 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node ha-767646 event: Registered Node ha-767646 in Controller
	  Normal  NodeReady                10m                    kubelet          Node ha-767646 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node ha-767646 event: Registered Node ha-767646 in Controller
	  Normal  RegisteredNode           9m                     node-controller  Node ha-767646 event: Registered Node ha-767646 in Controller
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-767646 event: Registered Node ha-767646 in Controller
	  Normal  NodeHasSufficientMemory  5m39s (x8 over 5m39s)  kubelet          Node ha-767646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m39s (x8 over 5m39s)  kubelet          Node ha-767646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m39s (x8 over 5m39s)  kubelet          Node ha-767646 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m39s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-767646 event: Registered Node ha-767646 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-767646 event: Registered Node ha-767646 in Controller
	  Normal  NodeNotReady             3m46s                  node-controller  Node ha-767646 status is now: NodeNotReady
	  Normal  RegisteredNode           3m24s                  node-controller  Node ha-767646 event: Registered Node ha-767646 in Controller
	  Normal  Starting                 119s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)    kubelet          Node ha-767646 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)    kubelet          Node ha-767646 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x8 over 119s)    kubelet          Node ha-767646 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           71s                    node-controller  Node ha-767646 event: Registered Node ha-767646 in Controller
	  Normal  RegisteredNode           12s                    node-controller  Node ha-767646 event: Registered Node ha-767646 in Controller
	  Normal  NodeNotReady             6s                     node-controller  Node ha-767646 status is now: NodeNotReady
	
	
	Name:               ha-767646-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-767646-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-767646
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T14_30_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 14:30:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767646-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 14:40:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 14:39:18 +0000   Mon, 01 Jul 2024 14:30:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 14:39:18 +0000   Mon, 01 Jul 2024 14:30:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 14:39:18 +0000   Mon, 01 Jul 2024 14:30:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 14:39:18 +0000   Mon, 01 Jul 2024 14:30:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-767646-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 9091c0f7c038406bbb431ae9bf4521f9
	  System UUID:                92694a9f-97e3-4975-82fc-2caf8612cff9
	  Boot ID:                    030faa4f-44aa-434e-978f-182f6d212f48
	  Kernel Version:             5.15.0-1063-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zmcqt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 etcd-ha-767646-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-7q2qb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-767646-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-767646-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-s476n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-767646-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-767646-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 72s                    kube-proxy       
	  Normal  Starting                 4m54s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet          Node ha-767646-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-767646-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-767646-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           10m                    node-controller  Node ha-767646-m02 event: Registered Node ha-767646-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-767646-m02 event: Registered Node ha-767646-m02 in Controller
	  Normal  RegisteredNode           9m                     node-controller  Node ha-767646-m02 event: Registered Node ha-767646-m02 in Controller
	  Normal  Starting                 7m1s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m (x8 over 7m)        kubelet          Node ha-767646-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m (x8 over 7m)        kubelet          Node ha-767646-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m (x8 over 7m)        kubelet          Node ha-767646-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-767646-m02 event: Registered Node ha-767646-m02 in Controller
	  Normal  NodeHasSufficientPID     5m36s (x8 over 5m36s)  kubelet          Node ha-767646-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node ha-767646-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node ha-767646-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-767646-m02 event: Registered Node ha-767646-m02 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-767646-m02 event: Registered Node ha-767646-m02 in Controller
	  Normal  RegisteredNode           3m24s                  node-controller  Node ha-767646-m02 event: Registered Node ha-767646-m02 in Controller
	  Normal  Starting                 117s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-767646-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-767646-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 117s)    kubelet          Node ha-767646-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           71s                    node-controller  Node ha-767646-m02 event: Registered Node ha-767646-m02 in Controller
	  Normal  RegisteredNode           12s                    node-controller  Node ha-767646-m02 event: Registered Node ha-767646-m02 in Controller
	
	
	Name:               ha-767646-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-767646-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=ha-767646
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_01T14_32_22_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 14:32:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-767646-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 14:40:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 14:40:18 +0000   Mon, 01 Jul 2024 14:40:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 14:40:18 +0000   Mon, 01 Jul 2024 14:40:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 14:40:18 +0000   Mon, 01 Jul 2024 14:40:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 14:40:18 +0000   Mon, 01 Jul 2024 14:40:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-767646-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 4cb4a7c0b92c40978fab5687592ac84a
	  System UUID:                f31deaec-5094-43b6-b20a-e0f2e574054b
	  Boot ID:                    030faa4f-44aa-434e-978f-182f6d212f48
	  Kernel Version:             5.15.0-1063-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-kl4qg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kindnet-hcsth              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m16s
	  kube-system                 kube-proxy-dz99m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m13s                  kube-proxy       
	  Normal  Starting                 13s                    kube-proxy       
	  Normal  Starting                 2m56s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m16s (x2 over 8m16s)  kubelet          Node ha-767646-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m16s (x2 over 8m16s)  kubelet          Node ha-767646-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  8m16s (x2 over 8m16s)  kubelet          Node ha-767646-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m16s                  node-controller  Node ha-767646-m04 event: Registered Node ha-767646-m04 in Controller
	  Normal  RegisteredNode           8m12s                  node-controller  Node ha-767646-m04 event: Registered Node ha-767646-m04 in Controller
	  Normal  RegisteredNode           8m12s                  node-controller  Node ha-767646-m04 event: Registered Node ha-767646-m04 in Controller
	  Normal  NodeReady                7m43s                  kubelet          Node ha-767646-m04 status is now: NodeReady
	  Normal  RegisteredNode           6m13s                  node-controller  Node ha-767646-m04 event: Registered Node ha-767646-m04 in Controller
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-767646-m04 event: Registered Node ha-767646-m04 in Controller
	  Normal  NodeNotReady             4m12s                  node-controller  Node ha-767646-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-767646-m04 event: Registered Node ha-767646-m04 in Controller
	  Normal  RegisteredNode           3m25s                  node-controller  Node ha-767646-m04 event: Registered Node ha-767646-m04 in Controller
	  Normal  Starting                 3m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m10s (x8 over 3m22s)  kubelet          Node ha-767646-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m10s (x8 over 3m22s)  kubelet          Node ha-767646-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m10s (x8 over 3m22s)  kubelet          Node ha-767646-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           72s                    node-controller  Node ha-767646-m04 event: Registered Node ha-767646-m04 in Controller
	  Normal  Starting                 33s                    kubelet          Starting kubelet.
	  Normal  NodeNotReady             32s                    node-controller  Node ha-767646-m04 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  20s (x8 over 33s)      kubelet          Node ha-767646-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 33s)      kubelet          Node ha-767646-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x8 over 33s)      kubelet          Node ha-767646-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                    node-controller  Node ha-767646-m04 event: Registered Node ha-767646-m04 in Controller
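	
	==> note: reading node Ready conditions (illustrative sketch) <==
	The describe output above shows ha-767646 with all four conditions Unknown ("Kubelet stopped posting node status") while ha-767646-m02 and ha-767646-m04 report Ready. A minimal client-go sketch that prints the same Ready condition per node, assuming a kubeconfig at the default location; this is an editor illustration, not part of the suite:
	
	// nodeready.go: print each node's Ready condition status and reason.
	package main
	
	import (
	    "context"
	    "fmt"
	
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
	    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    if err != nil {
	        panic(err)
	    }
	    cs, err := kubernetes.NewForConfig(cfg)
	    if err != nil {
	        panic(err)
	    }
	    nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	    if err != nil {
	        panic(err)
	    }
	    for _, n := range nodes.Items {
	        for _, c := range n.Status.Conditions {
	            if c.Type == "Ready" {
	                // Status "Unknown" with reason NodeStatusUnknown matches the
	                // ha-767646 rows in the describe output above.
	                fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
	            }
	        }
	    }
	}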
	
	
	==> dmesg <==
	[  +0.001024] FS-Cache: O-key=[8] '7f903b0000000000'
	[  +0.000739] FS-Cache: N-cookie c=000001f2 [p=000001e9 fl=2 nc=0 na=1]
	[  +0.000919] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=000000002bcd9820
	[  +0.001021] FS-Cache: N-key=[8] '7f903b0000000000'
	[  +0.003039] FS-Cache: Duplicate cookie detected
	[  +0.000679] FS-Cache: O-cookie c=000001ec [p=000001e9 fl=226 nc=0 na=1]
	[  +0.000953] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=000000004cf8c411
	[  +0.001075] FS-Cache: O-key=[8] '7f903b0000000000'
	[  +0.000699] FS-Cache: N-cookie c=000001f3 [p=000001e9 fl=2 nc=0 na=1]
	[  +0.000920] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=000000007108af87
	[  +0.001036] FS-Cache: N-key=[8] '7f903b0000000000'
	[  +2.349943] FS-Cache: Duplicate cookie detected
	[  +0.000692] FS-Cache: O-cookie c=000001ea [p=000001e9 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=000000003e41755a
	[  +0.001031] FS-Cache: O-key=[8] '7e903b0000000000'
	[  +0.000727] FS-Cache: N-cookie c=000001f5 [p=000001e9 fl=2 nc=0 na=1]
	[  +0.000922] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=0000000049bce9d6
	[  +0.001027] FS-Cache: N-key=[8] '7e903b0000000000'
	[  +0.286123] FS-Cache: Duplicate cookie detected
	[  +0.000698] FS-Cache: O-cookie c=000001ef [p=000001e9 fl=226 nc=0 na=1]
	[  +0.000952] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=000000001ef70645
	[  +0.001038] FS-Cache: O-key=[8] '84903b0000000000'
	[  +0.000692] FS-Cache: N-cookie c=000001f6 [p=000001e9 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=000000002bcd9820
	[  +0.001037] FS-Cache: N-key=[8] '84903b0000000000'
	
	
	==> etcd [9552715e8faeb8e7529f865d6150fc5eb0ea7fcaeac06cea11be4db0c8f59a41] <==
	{"level":"warn","ts":"2024-07-01T14:39:06.383119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.491816846s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-01T14:39:06.402393Z","caller":"traceutil/trace.go:171","msg":"trace[1741802489] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:2595; }","duration":"2.511084442s","start":"2024-07-01T14:39:03.891297Z","end":"2024-07-01T14:39:06.402381Z","steps":["trace[1741802489] 'agreement among raft nodes before linearized reading'  (duration: 2.491806131s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:39:06.402446Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T14:39:03.891288Z","time spent":"2.511148294s","remote":"127.0.0.1:33824","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":29,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-01T14:39:06.38348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.492250245s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" limit:10000 ","response":"range_response_count:2 size:1960"}
	{"level":"info","ts":"2024-07-01T14:39:06.402651Z","caller":"traceutil/trace.go:171","msg":"trace[1010859441] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:2; response_revision:2595; }","duration":"2.511417105s","start":"2024-07-01T14:39:03.891222Z","end":"2024-07-01T14:39:06.40264Z","steps":["trace[1010859441] 'agreement among raft nodes before linearized reading'  (duration: 2.492210926s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:39:06.402706Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T14:39:03.891213Z","time spent":"2.511481196s","remote":"127.0.0.1:52090","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":2,"response size":1984,"request content":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-01T14:39:06.383927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.49277588s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-01T14:39:06.402863Z","caller":"traceutil/trace.go:171","msg":"trace[1202790586] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:2595; }","duration":"2.511708776s","start":"2024-07-01T14:39:03.891146Z","end":"2024-07-01T14:39:06.402854Z","steps":["trace[1202790586] 'agreement among raft nodes before linearized reading'  (duration: 2.492756614s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:39:06.402921Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T14:39:03.891135Z","time spent":"2.51177317s","remote":"127.0.0.1:33636","response type":"/etcdserverpb.KV/Range","request count":0,"request size":69,"response count":0,"response size":29,"request content":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-01T14:39:06.384256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.496939367s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 ","response":"range_response_count:30 size:153283"}
	{"level":"info","ts":"2024-07-01T14:39:06.4031Z","caller":"traceutil/trace.go:171","msg":"trace[1926281570] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:30; response_revision:2595; }","duration":"2.515784353s","start":"2024-07-01T14:39:03.887308Z","end":"2024-07-01T14:39:06.403093Z","steps":["trace[1926281570] 'agreement among raft nodes before linearized reading'  (duration: 2.496758926s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:39:06.403155Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T14:39:03.887298Z","time spent":"2.51584393s","remote":"127.0.0.1:52024","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":30,"response size":153307,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-01T14:39:06.384336Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.497053115s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 ","response":"range_response_count:12 size:7092"}
	{"level":"info","ts":"2024-07-01T14:39:06.403421Z","caller":"traceutil/trace.go:171","msg":"trace[893081022] range","detail":"{range_begin:/registry/roles/; range_end:/registry/roles0; response_count:12; response_revision:2595; }","duration":"2.516134329s","start":"2024-07-01T14:39:03.887278Z","end":"2024-07-01T14:39:06.403412Z","steps":["trace[893081022] 'agreement among raft nodes before linearized reading'  (duration: 2.496992676s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:39:06.403478Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T14:39:03.887269Z","time spent":"2.51619554s","remote":"127.0.0.1:33652","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":12,"response size":7116,"request content":"key:\"/registry/roles/\" range_end:\"/registry/roles0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-01T14:39:06.384684Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.497447819s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 ","response":"range_response_count:4 size:2585"}
	{"level":"info","ts":"2024-07-01T14:39:06.409119Z","caller":"traceutil/trace.go:171","msg":"trace[2003995421] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:4; response_revision:2595; }","duration":"2.5218716s","start":"2024-07-01T14:39:03.88723Z","end":"2024-07-01T14:39:06.409102Z","steps":["trace[2003995421] 'agreement among raft nodes before linearized reading'  (duration: 2.497396692s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:39:06.409198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T14:39:03.88722Z","time spent":"2.521955695s","remote":"127.0.0.1:51914","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":4,"response size":2609,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:10000 "}
	{"level":"info","ts":"2024-07-01T14:39:06.382838Z","caller":"traceutil/trace.go:171","msg":"trace[1377301138] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:13; response_revision:2595; }","duration":"2.49869049s","start":"2024-07-01T14:39:03.884141Z","end":"2024-07-01T14:39:06.382832Z","steps":["trace[1377301138] 'agreement among raft nodes before linearized reading'  (duration: 2.498465897s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:39:06.409927Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T14:39:03.88413Z","time spent":"2.525781981s","remote":"127.0.0.1:33766","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":13,"response size":14421,"request content":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-01T14:39:06.410081Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T14:39:04.215904Z","time spent":"2.194164341s","remote":"127.0.0.1:51914","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":4,"response size":2609,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:500 "}
	{"level":"info","ts":"2024-07-01T14:39:06.410229Z","caller":"traceutil/trace.go:171","msg":"trace[335246158] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:2595; }","duration":"2.5188702s","start":"2024-07-01T14:39:03.89135Z","end":"2024-07-01T14:39:06.41022Z","steps":["trace[335246158] 'agreement among raft nodes before linearized reading'  (duration: 2.491456761s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-01T14:39:06.410288Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-01T14:39:03.891338Z","time spent":"2.518939165s","remote":"127.0.0.1:33598","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":29,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" limit:10000 "}
	{"level":"warn","ts":"2024-07-01T14:39:06.4651Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"98a466d4dee95a76","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-01T14:39:06.465222Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"98a466d4dee95a76","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 14:40:38 up 1 day, 22:23,  0 users,  load average: 1.27, 1.90, 1.91
	Linux ha-767646 5.15.0-1063-aws #69~20.04.1-Ubuntu SMP Fri May 10 19:21:30 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [701faf556352d21af4c790ec2c688233932b36cc00caeb4864e42a8398c6fd79] <==
	I0701 14:39:59.881423       1 main.go:227] handling current node
	I0701 14:39:59.885258       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0701 14:39:59.885297       1 main.go:250] Node ha-767646-m02 has CIDR [10.244.1.0/24] 
	I0701 14:39:59.885475       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0701 14:39:59.885568       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0701 14:39:59.885582       1 main.go:250] Node ha-767646-m04 has CIDR [10.244.3.0/24] 
	I0701 14:39:59.885666       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0701 14:40:09.898525       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:40:09.898554       1 main.go:227] handling current node
	I0701 14:40:09.898566       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0701 14:40:09.898571       1 main.go:250] Node ha-767646-m02 has CIDR [10.244.1.0/24] 
	I0701 14:40:09.898663       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0701 14:40:09.898676       1 main.go:250] Node ha-767646-m04 has CIDR [10.244.3.0/24] 
	I0701 14:40:19.911547       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:40:19.911574       1 main.go:227] handling current node
	I0701 14:40:19.911594       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0701 14:40:19.911599       1 main.go:250] Node ha-767646-m02 has CIDR [10.244.1.0/24] 
	I0701 14:40:19.911696       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0701 14:40:19.911709       1 main.go:250] Node ha-767646-m04 has CIDR [10.244.3.0/24] 
	I0701 14:40:29.919905       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0701 14:40:29.919935       1 main.go:227] handling current node
	I0701 14:40:29.919947       1 main.go:223] Handling node with IPs: map[192.168.49.3:{}]
	I0701 14:40:29.919953       1 main.go:250] Node ha-767646-m02 has CIDR [10.244.1.0/24] 
	I0701 14:40:29.920047       1 main.go:223] Handling node with IPs: map[192.168.49.5:{}]
	I0701 14:40:29.920062       1 main.go:250] Node ha-767646-m04 has CIDR [10.244.3.0/24] 
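	
	==> note: the route programming kindnet logs (illustrative sketch) <==
	Each sync iteration above enumerates the nodes and installs one route per remote pod CIDR via that node's InternalIP as gateway. A minimal sketch of the single route add logged for ha-767646-m02, using github.com/vishvananda/netlink; kindnet's actual implementation may differ in detail, and this requires CAP_NET_ADMIN on Linux:
	
	// routeadd.go: program "10.244.1.0/24 via 192.168.49.3", as in the log above.
	package main
	
	import (
	    "net"
	
	    "github.com/vishvananda/netlink"
	)
	
	func main() {
	    _, dst, err := net.ParseCIDR("10.244.1.0/24")
	    if err != nil {
	        panic(err)
	    }
	    route := &netlink.Route{Dst: dst, Gw: net.ParseIP("192.168.49.3")}
	    // RouteAdd fails with "file exists" if the route is already programmed,
	    // which is why the controller re-reconciles on every sync loop.
	    if err := netlink.RouteAdd(route); err != nil {
	        panic(err)
	    }
	}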
	
	
	==> kube-apiserver [66b2fa83742b701443db33e12f96421e7f97660bb9f35c96e9ebc6ab3399e96d] <==
	Trace[1576332384]: [2.762146122s] [2.762146122s] END
	I0701 14:39:06.475765       1 trace.go:236] Trace[1504819857]: "List(recursive=true) etcd3" audit-id:,key:/pods,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (01-Jul-2024 14:39:03.885) (total time: 2590ms):
	Trace[1504819857]: [2.590677775s] [2.590677775s] END
	I0701 14:39:06.476642       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0701 14:39:06.493139       1 shared_informer.go:320] Caches are synced for configmaps
	I0701 14:39:06.493821       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0701 14:39:06.493840       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0701 14:39:06.494482       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 14:39:06.495106       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0701 14:39:06.504094       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0701 14:39:06.504163       1 aggregator.go:165] initial CRD sync complete...
	I0701 14:39:06.504172       1 autoregister_controller.go:141] Starting autoregister controller
	I0701 14:39:06.504178       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0701 14:39:06.504183       1 cache.go:39] Caches are synced for autoregister controller
	I0701 14:39:06.506651       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0701 14:39:06.507931       1 cache.go:39] Caches are synced for AvailableConditionController controller
	W0701 14:39:06.520721       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0701 14:39:06.545250       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0701 14:39:06.546035       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 14:39:06.546056       1 policy_source.go:224] refreshing policies
	I0701 14:39:06.559085       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 14:39:06.625469       1 controller.go:615] quota admission added evaluator for: endpoints
	I0701 14:39:06.652150       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0701 14:39:06.662370       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	F0701 14:39:51.892870       1 hooks.go:203] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-apiserver [8298db67f51c35c99f6e65e8ef4b5d093a95b5e737244a096d1dd53b794ac2e6] <==
	I0701 14:39:55.726607       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0701 14:39:55.729285       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0701 14:39:55.729385       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0701 14:39:55.871502       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0701 14:39:55.878522       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0701 14:39:56.015706       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0701 14:39:56.015840       1 policy_source.go:224] refreshing policies
	I0701 14:39:56.019828       1 shared_informer.go:320] Caches are synced for configmaps
	I0701 14:39:56.026922       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0701 14:39:56.028394       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0701 14:39:56.028657       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0701 14:39:56.028681       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0701 14:39:56.029476       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0701 14:39:56.029587       1 aggregator.go:165] initial CRD sync complete...
	I0701 14:39:56.029594       1 autoregister_controller.go:141] Starting autoregister controller
	I0701 14:39:56.029600       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0701 14:39:56.029605       1 cache.go:39] Caches are synced for autoregister controller
	I0701 14:39:56.035132       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0701 14:39:56.035242       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0701 14:39:56.048207       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0701 14:39:56.112841       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0701 14:39:56.728075       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0701 14:39:57.182726       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0701 14:39:57.184669       1 controller.go:615] quota admission added evaluator for: endpoints
	I0701 14:39:57.197727       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3189f71a9c53a79c63d299a82147ad23261d5bfa2878db54997bf6d4fb7ba4f0] <==
	I0701 14:39:30.359559       1 serving.go:380] Generated self-signed cert in-memory
	I0701 14:39:30.991316       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0701 14:39:30.991350       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 14:39:30.993948       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0701 14:39:30.995106       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0701 14:39:30.995244       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0701 14:39:30.995319       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0701 14:39:41.014923       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
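	
	==> note: reading the apiserver healthz verdict (illustrative sketch) <==
	This controller-manager instance gave up because the apiserver's /healthz kept failing on the same poststarthook that the first apiserver instance above died on (start-service-ip-repair-controllers). Requesting /healthz?verbose returns the [+]/[-] list embedded in the error. A minimal Go sketch, assuming minikube's default apiserver port 8443 and the anonymous access Kubernetes grants to /healthz by default; TLS verification is skipped for brevity:
	
	// healthz.go: fetch the verbose health report from the local apiserver.
	package main
	
	import (
	    "crypto/tls"
	    "fmt"
	    "io"
	    "net/http"
	)
	
	func main() {
	    c := &http.Client{Transport: &http.Transport{
	        // Skip cert verification only for this diagnostic sketch.
	        TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    }}
	    resp, err := c.Get("https://192.168.49.2:8443/healthz?verbose")
	    if err != nil {
	        panic(err)
	    }
	    defer resp.Body.Close()
	    body, _ := io.ReadAll(resp.Body)
	    fmt.Printf("status=%d\n%s", resp.StatusCode, body)
	}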
	
	
	==> kube-controller-manager [a453d95ada89a92fe363744bfc95995f49f3f85c1034cc753f3f8f9bf0507a94] <==
	I0701 14:40:25.773054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.328µs"
	I0701 14:40:25.773134       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.568µs"
	I0701 14:40:25.775306       1 shared_informer.go:320] Caches are synced for PVC protection
	I0701 14:40:25.776528       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0701 14:40:25.776555       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0701 14:40:25.776567       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0701 14:40:25.776578       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0701 14:40:25.821571       1 shared_informer.go:320] Caches are synced for disruption
	I0701 14:40:25.867196       1 shared_informer.go:320] Caches are synced for daemon sets
	I0701 14:40:25.876366       1 shared_informer.go:320] Caches are synced for taint
	I0701 14:40:25.876572       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0701 14:40:25.877247       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0701 14:40:25.879449       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767646-m02"
	I0701 14:40:25.879549       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767646-m04"
	I0701 14:40:25.879614       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-767646"
	I0701 14:40:25.879661       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0701 14:40:25.909178       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0701 14:40:25.953512       1 shared_informer.go:320] Caches are synced for resource quota
	I0701 14:40:25.963693       1 shared_informer.go:320] Caches are synced for resource quota
	I0701 14:40:26.378511       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 14:40:26.378542       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0701 14:40:26.387805       1 shared_informer.go:320] Caches are synced for garbage collector
	I0701 14:40:31.229032       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-767646-m04"
	I0701 14:40:31.326148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.366835ms"
	I0701 14:40:31.326355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.476µs"
	
	
	==> kube-proxy [0732eeb813625e41a6b80a8939d0c21f534ab6388dc9143e7b432509b93a1316] <==
	I0701 14:39:29.811126       1 server_linux.go:69] "Using iptables proxy"
	I0701 14:39:29.832119       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0701 14:39:29.970770       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0701 14:39:29.970906       1 server_linux.go:165] "Using iptables Proxier"
	I0701 14:39:29.976069       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0701 14:39:29.976165       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0701 14:39:29.976212       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0701 14:39:29.976461       1 server.go:872] "Version info" version="v1.30.2"
	I0701 14:39:29.976669       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0701 14:39:29.977744       1 config.go:192] "Starting service config controller"
	I0701 14:39:29.977810       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0701 14:39:29.977863       1 config.go:101] "Starting endpoint slice config controller"
	I0701 14:39:29.977891       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0701 14:39:29.978466       1 config.go:319] "Starting node config controller"
	I0701 14:39:29.980325       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0701 14:39:30.086173       1 shared_informer.go:320] Caches are synced for service config
	I0701 14:39:30.086212       1 shared_informer.go:320] Caches are synced for node config
	I0701 14:39:30.086243       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [30e08ecbad11e6a6aa7b75665c79e67915b9d7b68a822930c56be34bfb146c9a] <==
	W0701 14:38:59.639848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0701 14:38:59.639888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0701 14:39:00.040473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 14:39:00.040612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0701 14:39:00.235737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 14:39:00.235781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0701 14:39:00.374882       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 14:39:00.374925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0701 14:39:00.547615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 14:39:00.547654       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0701 14:39:01.201147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 14:39:01.201190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0701 14:39:01.284653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0701 14:39:01.284693       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0701 14:39:01.367519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 14:39:01.367559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0701 14:39:01.390106       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 14:39:01.390144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0701 14:39:01.636051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 14:39:01.636094       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0701 14:39:01.912502       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 14:39:01.912543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0701 14:39:03.082568       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 14:39:03.082616       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0701 14:39:10.435395       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 01 14:39:42 ha-767646 kubelet[753]: E0701 14:39:42.043203     753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767646_kube-system(de892b1eda1ff2fa0ed2da4b6ec546f9)\"" pod="kube-system/kube-controller-manager-ha-767646" podUID="de892b1eda1ff2fa0ed2da4b6ec546f9"
	Jul 01 14:39:46 ha-767646 kubelet[753]: I0701 14:39:46.844687     753 scope.go:117] "RemoveContainer" containerID="3189f71a9c53a79c63d299a82147ad23261d5bfa2878db54997bf6d4fb7ba4f0"
	Jul 01 14:39:46 ha-767646 kubelet[753]: E0701 14:39:46.845665     753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767646_kube-system(de892b1eda1ff2fa0ed2da4b6ec546f9)\"" pod="kube-system/kube-controller-manager-ha-767646" podUID="de892b1eda1ff2fa0ed2da4b6ec546f9"
	Jul 01 14:39:47 ha-767646 kubelet[753]: I0701 14:39:47.429831     753 scope.go:117] "RemoveContainer" containerID="3189f71a9c53a79c63d299a82147ad23261d5bfa2878db54997bf6d4fb7ba4f0"
	Jul 01 14:39:47 ha-767646 kubelet[753]: E0701 14:39:47.430331     753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767646_kube-system(de892b1eda1ff2fa0ed2da4b6ec546f9)\"" pod="kube-system/kube-controller-manager-ha-767646" podUID="de892b1eda1ff2fa0ed2da4b6ec546f9"
	Jul 01 14:39:53 ha-767646 kubelet[753]: I0701 14:39:53.069475     753 scope.go:117] "RemoveContainer" containerID="66b2fa83742b701443db33e12f96421e7f97660bb9f35c96e9ebc6ab3399e96d"
	Jul 01 14:39:53 ha-767646 kubelet[753]: I0701 14:39:53.070555     753 status_manager.go:853] "Failed to get status for pod" podUID="c7f3d51efd52206961993b119ff656c5" pod="kube-system/kube-apiserver-ha-767646" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-767646\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Jul 01 14:39:53 ha-767646 kubelet[753]: E0701 14:39:53.071742     753 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-767646.17de1daa67155505\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-767646.17de1daa67155505  kube-system   2752 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-767646,UID:c7f3d51efd52206961993b119ff656c5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.30.2\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-767646,},FirstTimestamp:2024-07-01 14:38:45 +0000 UTC,LastTimestamp:2024-07-01 14:39:53.070994442 +0000 UTC m=+74.395572692,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-767646,}"
	Jul 01 14:39:55 ha-767646 kubelet[753]: E0701 14:39:55.923026     753 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 01 14:39:55 ha-767646 kubelet[753]: E0701 14:39:55.923086     753 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 01 14:39:55 ha-767646 kubelet[753]: E0701 14:39:55.923121     753 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 01 14:39:55 ha-767646 kubelet[753]: E0701 14:39:55.923788     753 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 01 14:39:57 ha-767646 kubelet[753]: I0701 14:39:57.081004     753 scope.go:117] "RemoveContainer" containerID="f8d3243d82cae4240cd10a5cd7bf88fba4a979f4d9607976d55bbe3db7aab9de"
	Jul 01 14:39:58 ha-767646 kubelet[753]: I0701 14:39:58.882258     753 scope.go:117] "RemoveContainer" containerID="3189f71a9c53a79c63d299a82147ad23261d5bfa2878db54997bf6d4fb7ba4f0"
	Jul 01 14:39:58 ha-767646 kubelet[753]: E0701 14:39:58.882727     753 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-767646_kube-system(de892b1eda1ff2fa0ed2da4b6ec546f9)\"" pod="kube-system/kube-controller-manager-ha-767646" podUID="de892b1eda1ff2fa0ed2da4b6ec546f9"
	Jul 01 14:40:00 ha-767646 kubelet[753]: I0701 14:40:00.127566     753 scope.go:117] "RemoveContainer" containerID="9fb99d837b69399b0b3d874d91525df672eface5680269f93ccd5d80d82362bf"
	Jul 01 14:40:07 ha-767646 kubelet[753]: E0701 14:40:07.091410     753 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-767646?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 01 14:40:07 ha-767646 kubelet[753]: E0701 14:40:07.517569     753 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-767646\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-767646?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 01 14:40:11 ha-767646 kubelet[753]: I0701 14:40:11.881251     753 scope.go:117] "RemoveContainer" containerID="3189f71a9c53a79c63d299a82147ad23261d5bfa2878db54997bf6d4fb7ba4f0"
	Jul 01 14:40:17 ha-767646 kubelet[753]: E0701 14:40:17.092701     753 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-767646?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 01 14:40:17 ha-767646 kubelet[753]: E0701 14:40:17.518237     753 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-767646\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-767646?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 01 14:40:27 ha-767646 kubelet[753]: E0701 14:40:27.093407     753 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-767646?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 01 14:40:27 ha-767646 kubelet[753]: E0701 14:40:27.518532     753 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-767646\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-767646?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 01 14:40:37 ha-767646 kubelet[753]: E0701 14:40:37.093910     753 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-767646?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Jul 01 14:40:37 ha-767646 kubelet[753]: E0701 14:40:37.519045     753 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-767646\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-767646?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-767646 -n ha-767646
helpers_test.go:261: (dbg) Run:  kubectl --context ha-767646 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
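The post-mortem logs above tell a consistent story: the controller-manager eventually synced all informer caches and reconciled the default/busybox ReplicaSet, the scheduler's "forbidden" warnings were a startup race against RBAC bootstrap (they stop once the client-ca informer syncs), and the kubelet spent much of the window unable to reach the apiserver through the HA VIP (control-plane.minikube.internal, 192.168.49.254:8443), which also explains the kube-controller-manager CrashLoopBackOff. A few checks that would confirm this against a live profile (a minimal sketch; the context and profile names are taken from this run, and the tier=control-plane label is the one kubeadm puts on its static pods):

	kubectl --context ha-767646 -n kube-system get pods -l tier=control-plane -o wide
	kubectl --context ha-767646 auth can-i list pods --as=system:kube-scheduler
	kubectl --context ha-767646 -n kube-node-lease get lease ha-767646 -o yaml
	minikube -p ha-767646 ssh -- sudo crictl ps -a --name kube-apiserver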
--- FAIL: TestMultiControlPlane/serial/RestartCluster (128.30s)
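To reproduce the failure in isolation, the test can be run directly from a minikube checkout with the arm64 binary already built; this sketch follows minikube's integration harness from memory, and the --binary flag name in particular is an assumption worth verifying against this revision:

	go test ./test/integration -timeout 30m \
	  -run 'TestMultiControlPlane/serial/RestartCluster' \
	  -args --binary=out/minikube-linux-arm64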

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (379.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-474598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0701 15:09:02.765115 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-474598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 102 (6m15.773194692s)
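The start command ran for 6m15s and exited with status 102; the stdout below shows provisioning itself succeeded (container restarted, addons enabled), so with --wait=true the failure surfaced in the component-verification phase that follows. For a run like this, the profile's own diagnostics are the quickest follow-up (a sketch; --problems filters the log for known error patterns and is assumed available in this minikube build):

	out/minikube-linux-arm64 status -p old-k8s-version-474598
	out/minikube-linux-arm64 -p old-k8s-version-474598 logs --problems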

                                                
                                                
-- stdout --
	* [old-k8s-version-474598] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-474598" primary control-plane node in "old-k8s-version-474598" cluster
	* Pulling base image v0.0.44-1719413016-19142 ...
	* Restarting existing docker container for "old-k8s-version-474598" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-474598 addons enable metrics-server
	
	* Enabled addons: default-storageclass, dashboard, storage-provisioner, metrics-server
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 15:09:01.677757 3906202 out.go:291] Setting OutFile to fd 1 ...
	I0701 15:09:01.677906 3906202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 15:09:01.677913 3906202 out.go:304] Setting ErrFile to fd 2...
	I0701 15:09:01.677917 3906202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 15:09:01.678177 3906202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 15:09:01.678567 3906202 out.go:298] Setting JSON to false
	I0701 15:09:01.679548 3906202 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":168693,"bootTime":1719677849,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0701 15:09:01.679618 3906202 start.go:139] virtualization:  
	I0701 15:09:01.683283 3906202 out.go:177] * [old-k8s-version-474598] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0701 15:09:01.685715 3906202 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 15:09:01.685758 3906202 notify.go:220] Checking for updates...
	I0701 15:09:01.688384 3906202 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 15:09:01.690386 3906202 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 15:09:01.692391 3906202 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	I0701 15:09:01.694233 3906202 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0701 15:09:01.696705 3906202 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 15:09:01.699344 3906202 config.go:182] Loaded profile config "old-k8s-version-474598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0701 15:09:01.701424 3906202 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0701 15:09:01.703274 3906202 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 15:09:01.731484 3906202 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0701 15:09:01.731612 3906202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 15:09:01.853936 3906202 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:66 SystemTime:2024-07-01 15:09:01.843721214 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 15:09:01.854048 3906202 docker.go:295] overlay module found
	I0701 15:09:01.856484 3906202 out.go:177] * Using the docker driver based on existing profile
	I0701 15:09:01.859114 3906202 start.go:297] selected driver: docker
	I0701 15:09:01.859140 3906202 start.go:901] validating driver "docker" against &{Name:old-k8s-version-474598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-474598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 15:09:01.859284 3906202 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 15:09:01.859897 3906202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 15:09:01.953856 3906202 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:66 SystemTime:2024-07-01 15:09:01.935697305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 15:09:01.954205 3906202 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 15:09:01.954242 3906202 cni.go:84] Creating CNI manager for ""
	I0701 15:09:01.954251 3906202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 15:09:01.954297 3906202 start.go:340] cluster config:
	{Name:old-k8s-version-474598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-474598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 15:09:01.957782 3906202 out.go:177] * Starting "old-k8s-version-474598" primary control-plane node in "old-k8s-version-474598" cluster
	I0701 15:09:01.959945 3906202 cache.go:121] Beginning downloading kic base image for docker with crio
	I0701 15:09:01.962558 3906202 out.go:177] * Pulling base image v0.0.44-1719413016-19142 ...
	I0701 15:09:01.965126 3906202 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0701 15:09:01.965144 3906202 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon
	I0701 15:09:01.965186 3906202 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0701 15:09:01.965196 3906202 cache.go:56] Caching tarball of preloaded images
	I0701 15:09:01.965272 3906202 preload.go:173] Found /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0701 15:09:01.965281 3906202 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0701 15:09:01.965401 3906202 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/config.json ...
	I0701 15:09:02.003468 3906202 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon, skipping pull
	I0701 15:09:02.003504 3906202 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d exists in daemon, skipping load
	I0701 15:09:02.003528 3906202 cache.go:194] Successfully downloaded all kic artifacts
	I0701 15:09:02.003571 3906202 start.go:360] acquireMachinesLock for old-k8s-version-474598: {Name:mk291aa16770196f372f19ad91bae726dd814e84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 15:09:02.003661 3906202 start.go:364] duration metric: took 54.187µs to acquireMachinesLock for "old-k8s-version-474598"
	I0701 15:09:02.003702 3906202 start.go:96] Skipping create...Using existing machine configuration
	I0701 15:09:02.003712 3906202 fix.go:54] fixHost starting: 
	I0701 15:09:02.004044 3906202 cli_runner.go:164] Run: docker container inspect old-k8s-version-474598 --format={{.State.Status}}
	I0701 15:09:02.029861 3906202 fix.go:112] recreateIfNeeded on old-k8s-version-474598: state=Stopped err=<nil>
	W0701 15:09:02.029898 3906202 fix.go:138] unexpected machine state, will restart: <nil>
	I0701 15:09:02.032925 3906202 out.go:177] * Restarting existing docker container for "old-k8s-version-474598" ...
	I0701 15:09:02.044334 3906202 cli_runner.go:164] Run: docker start old-k8s-version-474598
	I0701 15:09:02.411369 3906202 cli_runner.go:164] Run: docker container inspect old-k8s-version-474598 --format={{.State.Status}}
	I0701 15:09:02.434530 3906202 kic.go:430] container "old-k8s-version-474598" state is running.
	I0701 15:09:02.435363 3906202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-474598
	I0701 15:09:02.466693 3906202 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/config.json ...
	I0701 15:09:02.466930 3906202 machine.go:94] provisionDockerMachine start ...
	I0701 15:09:02.466994 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:02.492552 3906202 main.go:141] libmachine: Using SSH client type: native
	I0701 15:09:02.492846 3906202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 34190 <nil> <nil>}
	I0701 15:09:02.492859 3906202 main.go:141] libmachine: About to run SSH command:
	hostname
	I0701 15:09:02.493487 3906202 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0701 15:09:05.652655 3906202 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-474598
	
	I0701 15:09:05.652682 3906202 ubuntu.go:169] provisioning hostname "old-k8s-version-474598"
	I0701 15:09:05.652758 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:05.671338 3906202 main.go:141] libmachine: Using SSH client type: native
	I0701 15:09:05.671619 3906202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 34190 <nil> <nil>}
	I0701 15:09:05.671640 3906202 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-474598 && echo "old-k8s-version-474598" | sudo tee /etc/hostname
	I0701 15:09:05.841639 3906202 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-474598
	
	I0701 15:09:05.841816 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:05.868249 3906202 main.go:141] libmachine: Using SSH client type: native
	I0701 15:09:05.868548 3906202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 34190 <nil> <nil>}
	I0701 15:09:05.868566 3906202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-474598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-474598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-474598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0701 15:09:06.018002 3906202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0701 15:09:06.018041 3906202 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19166-3708336/.minikube CaCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19166-3708336/.minikube}
	I0701 15:09:06.018062 3906202 ubuntu.go:177] setting up certificates
	I0701 15:09:06.018071 3906202 provision.go:84] configureAuth start
	I0701 15:09:06.018136 3906202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-474598
	I0701 15:09:06.040513 3906202 provision.go:143] copyHostCerts
	I0701 15:09:06.040592 3906202 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem, removing ...
	I0701 15:09:06.040601 3906202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem
	I0701 15:09:06.040671 3906202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.pem (1082 bytes)
	I0701 15:09:06.040760 3906202 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem, removing ...
	I0701 15:09:06.040765 3906202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem
	I0701 15:09:06.040803 3906202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/cert.pem (1123 bytes)
	I0701 15:09:06.040859 3906202 exec_runner.go:144] found /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem, removing ...
	I0701 15:09:06.040864 3906202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem
	I0701 15:09:06.040888 3906202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19166-3708336/.minikube/key.pem (1675 bytes)
	I0701 15:09:06.040935 3906202 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-474598 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-474598]
	I0701 15:09:06.626479 3906202 provision.go:177] copyRemoteCerts
	I0701 15:09:06.626606 3906202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0701 15:09:06.626668 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:06.658766 3906202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/old-k8s-version-474598/id_rsa Username:docker}
	I0701 15:09:06.767310 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0701 15:09:06.804266 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0701 15:09:06.840371 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0701 15:09:06.875602 3906202 provision.go:87] duration metric: took 857.517761ms to configureAuth
	I0701 15:09:06.875708 3906202 ubuntu.go:193] setting minikube options for container-runtime
	I0701 15:09:06.876055 3906202 config.go:182] Loaded profile config "old-k8s-version-474598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0701 15:09:06.876715 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:06.906927 3906202 main.go:141] libmachine: Using SSH client type: native
	I0701 15:09:06.907205 3906202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2ba0] 0x3e5400 <nil>  [] 0s} 127.0.0.1 34190 <nil> <nil>}
	I0701 15:09:06.907223 3906202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0701 15:09:07.338248 3906202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0701 15:09:07.338269 3906202 machine.go:97] duration metric: took 4.871329343s to provisionDockerMachine
	I0701 15:09:07.338281 3906202 start.go:293] postStartSetup for "old-k8s-version-474598" (driver="docker")
	I0701 15:09:07.338292 3906202 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0701 15:09:07.338374 3906202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0701 15:09:07.338414 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:07.370406 3906202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/old-k8s-version-474598/id_rsa Username:docker}
	I0701 15:09:07.494418 3906202 ssh_runner.go:195] Run: cat /etc/os-release
	I0701 15:09:07.497714 3906202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0701 15:09:07.497747 3906202 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0701 15:09:07.497767 3906202 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0701 15:09:07.497774 3906202 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0701 15:09:07.497784 3906202 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/addons for local assets ...
	I0701 15:09:07.497840 3906202 filesync.go:126] Scanning /home/jenkins/minikube-integration/19166-3708336/.minikube/files for local assets ...
	I0701 15:09:07.497931 3906202 filesync.go:149] local asset: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem -> 37137252.pem in /etc/ssl/certs
	I0701 15:09:07.498036 3906202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0701 15:09:07.511732 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem --> /etc/ssl/certs/37137252.pem (1708 bytes)
	I0701 15:09:07.545999 3906202 start.go:296] duration metric: took 207.703515ms for postStartSetup
	I0701 15:09:07.546083 3906202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 15:09:07.546136 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:07.570223 3906202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/old-k8s-version-474598/id_rsa Username:docker}
	I0701 15:09:07.671197 3906202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0701 15:09:07.675975 3906202 fix.go:56] duration metric: took 5.672256685s for fixHost
	I0701 15:09:07.675997 3906202 start.go:83] releasing machines lock for "old-k8s-version-474598", held for 5.67232076s
	I0701 15:09:07.676070 3906202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-474598
	I0701 15:09:07.714635 3906202 ssh_runner.go:195] Run: cat /version.json
	I0701 15:09:07.714683 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:07.714907 3906202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0701 15:09:07.714967 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:07.732100 3906202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/old-k8s-version-474598/id_rsa Username:docker}
	I0701 15:09:07.745467 3906202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/old-k8s-version-474598/id_rsa Username:docker}
	I0701 15:09:07.852582 3906202 ssh_runner.go:195] Run: systemctl --version
	I0701 15:09:08.003165 3906202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0701 15:09:08.175804 3906202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0701 15:09:08.187783 3906202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 15:09:08.209641 3906202 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0701 15:09:08.209805 3906202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0701 15:09:08.222839 3906202 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0701 15:09:08.222924 3906202 start.go:494] detecting cgroup driver to use...
	I0701 15:09:08.223018 3906202 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0701 15:09:08.223117 3906202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0701 15:09:08.244481 3906202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0701 15:09:08.263953 3906202 docker.go:217] disabling cri-docker service (if available) ...
	I0701 15:09:08.264110 3906202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0701 15:09:08.285473 3906202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0701 15:09:08.304888 3906202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0701 15:09:08.436734 3906202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0701 15:09:08.588100 3906202 docker.go:233] disabling docker service ...
	I0701 15:09:08.588273 3906202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0701 15:09:08.611343 3906202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0701 15:09:08.629722 3906202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0701 15:09:08.775984 3906202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0701 15:09:08.904266 3906202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0701 15:09:08.917286 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0701 15:09:08.934559 3906202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0701 15:09:08.934628 3906202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 15:09:08.944614 3906202 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0701 15:09:08.944683 3906202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 15:09:08.954932 3906202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 15:09:08.964941 3906202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0701 15:09:08.976570 3906202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0701 15:09:08.986263 3906202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0701 15:09:08.996264 3906202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0701 15:09:09.007557 3906202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 15:09:09.118586 3906202 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0701 15:09:10.139665 3906202 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.021043865s)
	I0701 15:09:10.139694 3906202 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0701 15:09:10.139747 3906202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0701 15:09:10.144710 3906202 start.go:562] Will wait 60s for crictl version
	I0701 15:09:10.144776 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:09:10.148951 3906202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0701 15:09:10.238889 3906202 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0701 15:09:10.238972 3906202 ssh_runner.go:195] Run: crio --version
	I0701 15:09:10.296396 3906202 ssh_runner.go:195] Run: crio --version
	I0701 15:09:10.355685 3906202 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0701 15:09:10.357601 3906202 cli_runner.go:164] Run: docker network inspect old-k8s-version-474598 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 15:09:10.383234 3906202 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0701 15:09:10.387801 3906202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0701 15:09:10.402832 3906202 kubeadm.go:877] updating cluster {Name:old-k8s-version-474598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-474598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0701 15:09:10.402950 3906202 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0701 15:09:10.403004 3906202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 15:09:10.484922 3906202 crio.go:514] all images are preloaded for cri-o runtime.
	I0701 15:09:10.484942 3906202 crio.go:433] Images already preloaded, skipping extraction
	I0701 15:09:10.484997 3906202 ssh_runner.go:195] Run: sudo crictl images --output json
	I0701 15:09:10.569414 3906202 crio.go:514] all images are preloaded for cri-o runtime.
	I0701 15:09:10.569434 3906202 cache_images.go:84] Images are preloaded, skipping loading
	I0701 15:09:10.569441 3906202 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 crio true true} ...
	I0701 15:09:10.569562 3906202 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-474598 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-474598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
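
The [Unit]/[Service] block above is the systemd drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line clears the base unit's command before the second ExecStart= installs the override, which is standard systemd drop-in behavior. A hedged sketch of assembling such a drop-in (the flag list and function name are illustrative, not minikube's):

// Hedged sketch: building a kubelet systemd drop-in like the one above.
package main

import (
	"fmt"
	"strings"
)

func kubeletDropIn(version, node, ip string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime=remote",
		"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
		"--hostname-override=" + node,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + ip,
	}
	// Empty ExecStart= clears the base unit's command; the second one overrides it.
	return fmt.Sprintf("[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/%s/kubelet %s\n\n[Install]\n",
		version, strings.Join(flags, " "))
}

func main() {
	fmt.Print(kubeletDropIn("v1.20.0", "old-k8s-version-474598", "192.168.76.2"))
}
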
	I0701 15:09:10.569640 3906202 ssh_runner.go:195] Run: crio config
	I0701 15:09:10.717297 3906202 cni.go:84] Creating CNI manager for ""
	I0701 15:09:10.717358 3906202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 15:09:10.717382 3906202 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0701 15:09:10.717417 3906202 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-474598 NodeName:old-k8s-version-474598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0701 15:09:10.717615 3906202 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-474598"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
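Note the evictionHard thresholds of "0%" in the KubeletConfiguration above: together with imageGCHighThresholdPercent: 100 they effectively disable disk-pressure eviction, per the "disable disk resource management by default" comment. A sketch of rendering that block with text/template, assuming an abbreviated template rather than minikube's real one:

// Sketch: rendering the eviction block of the KubeletConfiguration above
// with text/template; the template is abbreviated, not minikube's real one.
package main

import (
	"os"
	"text/template"
)

const evictionTmpl = "evictionHard:\n" +
	"  nodefs.available: \"{{.NodefsAvailable}}\"\n" +
	"  nodefs.inodesFree: \"{{.NodefsInodesFree}}\"\n" +
	"  imagefs.available: \"{{.ImagefsAvailable}}\"\n"

func main() {
	t := template.Must(template.New("eviction").Parse(evictionTmpl))
	// "0%" thresholds mean the kubelet never reports disk pressure.
	_ = t.Execute(os.Stdout, struct {
		NodefsAvailable, NodefsInodesFree, ImagefsAvailable string
	}{"0%", "0%", "0%"})
}
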
	I0701 15:09:10.717701 3906202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0701 15:09:10.727063 3906202 binaries.go:44] Found k8s binaries, skipping transfer
	I0701 15:09:10.727198 3906202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0701 15:09:10.735755 3906202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0701 15:09:10.753593 3906202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0701 15:09:10.771959 3906202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0701 15:09:10.789841 3906202 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0701 15:09:10.793496 3906202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
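
The bash one-liner above makes the /etc/hosts update idempotent: grep -v strips any stale control-plane.minikube.internal entry, the echo appends the current IP, and the result is staged in /tmp before sudo cp replaces the file. The same filter-and-append in plain Go (a sketch; patchHosts is a made-up helper):

// Sketch of the /etc/hosts filter-and-append performed by the shell line above.
package main

import (
	"fmt"
	"strings"
)

func patchHosts(hosts, ip string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		// equivalent of: grep -v $'\tcontrol-plane.minikube.internal$'
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// equivalent of: echo "192.168.76.2<TAB>control-plane.minikube.internal"
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	return strings.Join(kept, "\n")
}

func main() {
	fmt.Println(patchHosts("127.0.0.1\tlocalhost", "192.168.76.2"))
}
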
	I0701 15:09:10.804019 3906202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 15:09:10.939462 3906202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 15:09:10.967434 3906202 certs.go:68] Setting up /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598 for IP: 192.168.76.2
	I0701 15:09:10.967451 3906202 certs.go:194] generating shared ca certs ...
	I0701 15:09:10.967468 3906202 certs.go:226] acquiring lock for ca certs: {Name:mkef61a10d340f62d4856e4c226678a7bd970ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 15:09:10.967608 3906202 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key
	I0701 15:09:10.967650 3906202 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key
	I0701 15:09:10.967657 3906202 certs.go:256] generating profile certs ...
	I0701 15:09:10.967739 3906202 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.key
	I0701 15:09:10.967798 3906202 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/apiserver.key.5190c0f1
	I0701 15:09:10.967839 3906202 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/proxy-client.key
	I0701 15:09:10.967975 3906202 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem (1338 bytes)
	W0701 15:09:10.968002 3906202 certs.go:480] ignoring /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725_empty.pem, impossibly tiny 0 bytes
	I0701 15:09:10.968010 3906202 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca-key.pem (1675 bytes)
	I0701 15:09:10.968033 3906202 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem (1082 bytes)
	I0701 15:09:10.968058 3906202 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem (1123 bytes)
	I0701 15:09:10.968080 3906202 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/key.pem (1675 bytes)
	I0701 15:09:10.968121 3906202 certs.go:484] found cert: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem (1708 bytes)
	I0701 15:09:10.968725 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0701 15:09:11.023705 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0701 15:09:11.145532 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0701 15:09:11.234858 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0701 15:09:11.297229 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0701 15:09:11.358416 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0701 15:09:11.408658 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0701 15:09:11.448986 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0701 15:09:11.474510 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/ssl/certs/37137252.pem --> /usr/share/ca-certificates/37137252.pem (1708 bytes)
	I0701 15:09:11.501956 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0701 15:09:11.530561 3906202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/3713725.pem --> /usr/share/ca-certificates/3713725.pem (1338 bytes)
	I0701 15:09:11.558286 3906202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0701 15:09:11.578798 3906202 ssh_runner.go:195] Run: openssl version
	I0701 15:09:11.585579 3906202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0701 15:09:11.596404 3906202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0701 15:09:11.600850 3906202 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  1 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I0701 15:09:11.600910 3906202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0701 15:09:11.609070 3906202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0701 15:09:11.619308 3906202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3713725.pem && ln -fs /usr/share/ca-certificates/3713725.pem /etc/ssl/certs/3713725.pem"
	I0701 15:09:11.629872 3906202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3713725.pem
	I0701 15:09:11.634291 3906202 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  1 14:25 /usr/share/ca-certificates/3713725.pem
	I0701 15:09:11.634359 3906202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3713725.pem
	I0701 15:09:11.642541 3906202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3713725.pem /etc/ssl/certs/51391683.0"
	I0701 15:09:11.652346 3906202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/37137252.pem && ln -fs /usr/share/ca-certificates/37137252.pem /etc/ssl/certs/37137252.pem"
	I0701 15:09:11.667414 3906202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/37137252.pem
	I0701 15:09:11.676251 3906202 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  1 14:25 /usr/share/ca-certificates/37137252.pem
	I0701 15:09:11.676314 3906202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/37137252.pem
	I0701 15:09:11.691425 3906202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/37137252.pem /etc/ssl/certs/3ec20f2e.0"
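
The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs, which is why each .pem then gets a <hash>.0 symlink (e.g. b5213941.0 for minikubeCA.pem). A sketch of deriving that link name (assumes the openssl binary is on PATH; subjectHash is a made-up helper):

// Sketch: derive the /etc/ssl/certs/<hash>.0 link name for a CA certificate.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	// For minikubeCA.pem this prints the b5213941.0 link seen in the log.
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}
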
	I0701 15:09:11.714305 3906202 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0701 15:09:11.718790 3906202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0701 15:09:11.726958 3906202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0701 15:09:11.735374 3906202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0701 15:09:11.743779 3906202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0701 15:09:11.752065 3906202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0701 15:09:11.760303 3906202 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
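
Each -checkend 86400 run above succeeds only if the certificate is still valid 24 hours from now, so a soon-to-expire cert fails the check and gets regenerated. A pure-Go equivalent using crypto/x509 (a sketch, not minikube's code):

// Sketch: a pure-Go version of `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the cert at path is still valid d from now.
func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err) // false (or an error) would trigger regeneration
}
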
	I0701 15:09:11.768255 3906202 kubeadm.go:391] StartCluster: {Name:old-k8s-version-474598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-474598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 15:09:11.768350 3906202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0701 15:09:11.768406 3906202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0701 15:09:11.832617 3906202 cri.go:89] found id: ""
	I0701 15:09:11.832685 3906202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0701 15:09:11.843543 3906202 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0701 15:09:11.843560 3906202 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0701 15:09:11.843565 3906202 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0701 15:09:11.843614 3906202 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0701 15:09:11.853507 3906202 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0701 15:09:11.853983 3906202 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-474598" does not appear in /home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 15:09:11.854132 3906202 kubeconfig.go:62] /home/jenkins/minikube-integration/19166-3708336/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-474598" cluster setting kubeconfig missing "old-k8s-version-474598" context setting]
	I0701 15:09:11.854514 3906202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/kubeconfig: {Name:mk4d5838a81c57a1d9ec9a509328664588dd34aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 15:09:11.855942 3906202 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0701 15:09:11.865908 3906202 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0701 15:09:11.865976 3906202 kubeadm.go:591] duration metric: took 22.404851ms to restartPrimaryControlPlane
	I0701 15:09:11.865999 3906202 kubeadm.go:393] duration metric: took 97.752619ms to StartCluster
	I0701 15:09:11.866040 3906202 settings.go:142] acquiring lock: {Name:mke9008d6920f4be65eddeda5d60c738ed3823ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 15:09:11.866113 3906202 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 15:09:11.866748 3906202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/kubeconfig: {Name:mk4d5838a81c57a1d9ec9a509328664588dd34aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 15:09:11.866981 3906202 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0701 15:09:11.867344 3906202 config.go:182] Loaded profile config "old-k8s-version-474598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0701 15:09:11.867410 3906202 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0701 15:09:11.867540 3906202 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-474598"
	I0701 15:09:11.867593 3906202 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-474598"
	W0701 15:09:11.867617 3906202 addons.go:243] addon storage-provisioner should already be in state true
	I0701 15:09:11.867772 3906202 host.go:66] Checking if "old-k8s-version-474598" exists ...
	I0701 15:09:11.868417 3906202 cli_runner.go:164] Run: docker container inspect old-k8s-version-474598 --format={{.State.Status}}
	I0701 15:09:11.868632 3906202 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-474598"
	I0701 15:09:11.868782 3906202 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-474598"
	I0701 15:09:11.868879 3906202 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-474598"
	I0701 15:09:11.868907 3906202 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-474598"
	W0701 15:09:11.868920 3906202 addons.go:243] addon metrics-server should already be in state true
	I0701 15:09:11.868957 3906202 host.go:66] Checking if "old-k8s-version-474598" exists ...
	I0701 15:09:11.869184 3906202 addons.go:69] Setting dashboard=true in profile "old-k8s-version-474598"
	I0701 15:09:11.869205 3906202 addons.go:234] Setting addon dashboard=true in "old-k8s-version-474598"
	W0701 15:09:11.869211 3906202 addons.go:243] addon dashboard should already be in state true
	I0701 15:09:11.869229 3906202 host.go:66] Checking if "old-k8s-version-474598" exists ...
	I0701 15:09:11.869380 3906202 cli_runner.go:164] Run: docker container inspect old-k8s-version-474598 --format={{.State.Status}}
	I0701 15:09:11.869587 3906202 cli_runner.go:164] Run: docker container inspect old-k8s-version-474598 --format={{.State.Status}}
	I0701 15:09:11.870222 3906202 out.go:177] * Verifying Kubernetes components...
	I0701 15:09:11.870434 3906202 cli_runner.go:164] Run: docker container inspect old-k8s-version-474598 --format={{.State.Status}}
	I0701 15:09:11.874138 3906202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0701 15:09:11.979559 3906202 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0701 15:09:11.982990 3906202 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0701 15:09:11.983013 3906202 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0701 15:09:11.983079 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:11.988024 3906202 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-474598"
	W0701 15:09:11.988051 3906202 addons.go:243] addon default-storageclass should already be in state true
	I0701 15:09:11.988360 3906202 host.go:66] Checking if "old-k8s-version-474598" exists ...
	I0701 15:09:11.991337 3906202 cli_runner.go:164] Run: docker container inspect old-k8s-version-474598 --format={{.State.Status}}
	I0701 15:09:11.988630 3906202 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0701 15:09:11.997616 3906202 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0701 15:09:11.999689 3906202 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0701 15:09:11.999713 3906202 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0701 15:09:11.999812 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:12.033359 3906202 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0701 15:09:12.037703 3906202 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 15:09:12.037725 3906202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0701 15:09:12.037904 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:12.095199 3906202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/old-k8s-version-474598/id_rsa Username:docker}
	I0701 15:09:12.100417 3906202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/old-k8s-version-474598/id_rsa Username:docker}
	I0701 15:09:12.141204 3906202 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0701 15:09:12.141228 3906202 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0701 15:09:12.141293 3906202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-474598
	I0701 15:09:12.159207 3906202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/old-k8s-version-474598/id_rsa Username:docker}
	I0701 15:09:12.191788 3906202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34190 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/old-k8s-version-474598/id_rsa Username:docker}
	I0701 15:09:12.229378 3906202 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0701 15:09:12.267096 3906202 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-474598" to be "Ready" ...
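
node_ready.go now polls the apiserver for up to 6m0s; the "connection refused" errors further down are those polls failing while the control plane restarts. A rough sketch of such a wait loop using plain net/http (a real check would parse the Node's status.conditions and configure TLS with the cluster CA):

// Rough sketch of the node-readiness wait loop; not minikube's implementation.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitNodeReady(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// while the apiserver restarts, err is the "connection refused"
		// seen in the log; just sleep and poll again
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node not ready within %v", timeout)
}

func main() {
	err := waitNodeReady("https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-474598", 6*time.Minute)
	fmt.Println(err)
}
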
	I0701 15:09:12.315389 3906202 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0701 15:09:12.315460 3906202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0701 15:09:12.340448 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 15:09:12.352455 3906202 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0701 15:09:12.352522 3906202 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0701 15:09:12.357546 3906202 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0701 15:09:12.357616 3906202 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0701 15:09:12.432286 3906202 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0701 15:09:12.432359 3906202 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0701 15:09:12.451905 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0701 15:09:12.468322 3906202 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 15:09:12.468368 3906202 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0701 15:09:12.558467 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 15:09:12.597468 3906202 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0701 15:09:12.597542 3906202 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0701 15:09:12.614590 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:12.614673 3906202 retry.go:31] will retry after 357.209433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
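
The "apply failed, will retry" / "will retry after ..." pairs that follow come from a retry helper (retry.go:31) that sleeps a jittered, growing delay between kubectl apply attempts until the apiserver answers. A hedged sketch of that pattern (not minikube's actual retry implementation):

// Sketch: retry runs fn up to attempts times, sleeping a jittered,
// growing delay after each failure.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// jitter keeps parallel appliers from retrying in lockstep
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		base *= 2
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("the connection to the server localhost:8443 was refused")
		}
		return nil
	})
}
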
	I0701 15:09:12.647339 3906202 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0701 15:09:12.647408 3906202 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0701 15:09:12.739515 3906202 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0701 15:09:12.739586 3906202 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0701 15:09:12.808675 3906202 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0701 15:09:12.808747 3906202 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0701 15:09:12.812612 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:12.812678 3906202 retry.go:31] will retry after 345.536076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0701 15:09:12.821501 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:12.821530 3906202 retry.go:31] will retry after 362.949228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:12.834493 3906202 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0701 15:09:12.834564 3906202 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0701 15:09:12.853880 3906202 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0701 15:09:12.853901 3906202 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0701 15:09:12.872298 3906202 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 15:09:12.872365 3906202 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0701 15:09:12.891347 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 15:09:12.972736 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0701 15:09:12.985596 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:12.985677 3906202 retry.go:31] will retry after 297.375167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0701 15:09:13.081182 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.081260 3906202 retry.go:31] will retry after 284.62434ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.159451 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0701 15:09:13.184836 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 15:09:13.283830 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0701 15:09:13.325195 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.325223 3906202 retry.go:31] will retry after 234.387438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.366446 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0701 15:09:13.403186 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.403262 3906202 retry.go:31] will retry after 383.173801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0701 15:09:13.528717 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.528814 3906202 retry.go:31] will retry after 354.525254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.559975 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0701 15:09:13.666441 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.666520 3906202 retry.go:31] will retry after 358.890286ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0701 15:09:13.731538 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.731618 3906202 retry.go:31] will retry after 659.086324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.787365 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0701 15:09:13.882062 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.882092 3906202 retry.go:31] will retry after 705.441353ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.884406 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0701 15:09:13.984395 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:13.984433 3906202 retry.go:31] will retry after 316.135865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:14.025701 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0701 15:09:14.124519 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:14.124599 3906202 retry.go:31] will retry after 960.14679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:14.268385 3906202 node_ready.go:53] error getting node "old-k8s-version-474598": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-474598": dial tcp 192.168.76.2:8443: connect: connection refused
	I0701 15:09:14.301492 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 15:09:14.391772 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0701 15:09:14.427334 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:14.427364 3906202 retry.go:31] will retry after 1.17722557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0701 15:09:14.546447 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:14.546476 3906202 retry.go:31] will retry after 427.230173ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:14.587684 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0701 15:09:14.684019 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:14.684050 3906202 retry.go:31] will retry after 671.112521ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:14.973931 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0701 15:09:15.077463 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:15.077495 3906202 retry.go:31] will retry after 964.8899ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:15.085784 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0701 15:09:15.179580 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:15.179674 3906202 retry.go:31] will retry after 686.44516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:15.355304 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0701 15:09:15.452630 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:15.452659 3906202 retry.go:31] will retry after 1.637853749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:15.605166 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0701 15:09:15.686830 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:15.686864 3906202 retry.go:31] will retry after 1.672480075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:15.866744 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0701 15:09:15.980658 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:15.980687 3906202 retry.go:31] will retry after 1.357492873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:16.042995 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0701 15:09:16.166003 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:16.166032 3906202 retry.go:31] will retry after 973.029141ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:16.767707 3906202 node_ready.go:53] error getting node "old-k8s-version-474598": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-474598": dial tcp 192.168.76.2:8443: connect: connection refused
	I0701 15:09:17.091282 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 15:09:17.139758 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0701 15:09:17.215107 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:17.215144 3906202 retry.go:31] will retry after 1.988228676s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0701 15:09:17.302930 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:17.302962 3906202 retry.go:31] will retry after 3.000188701s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:17.339345 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 15:09:17.359694 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0701 15:09:17.455106 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:17.455136 3906202 retry.go:31] will retry after 3.691289677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0701 15:09:17.509465 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:17.509557 3906202 retry.go:31] will retry after 2.237985683s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:18.768681 3906202 node_ready.go:53] error getting node "old-k8s-version-474598": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-474598": dial tcp 192.168.76.2:8443: connect: connection refused
	I0701 15:09:19.204252 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0701 15:09:19.322106 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:19.322155 3906202 retry.go:31] will retry after 1.528811418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:19.747781 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0701 15:09:19.870069 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:19.870099 3906202 retry.go:31] will retry after 4.216522237s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:20.303881 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0701 15:09:20.513547 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:20.513581 3906202 retry.go:31] will retry after 2.379794824s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0701 15:09:20.851860 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 15:09:21.147544 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 15:09:22.894177 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0701 15:09:24.087382 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0701 15:09:30.768558 3906202 node_ready.go:53] error getting node "old-k8s-version-474598": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-474598": net/http: TLS handshake timeout
	I0701 15:09:31.080339 3906202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.228425741s)
	W0701 15:09:31.080406 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0701 15:09:31.080425 3906202 retry.go:31] will retry after 4.385239598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
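
The failure mode changes here from "connection refused" to "net/http: TLS handshake timeout": the apiserver's port is now open, but the server is still too busy initializing to complete a handshake. A hedged way to tell the two states apart from the host (address and port taken from the node_ready lines above):

	if ! nc -z -w 2 192.168.76.2 8443; then
	  echo "apiserver not listening yet"            # -> connection refused
	elif ! curl -sk --max-time 5 https://192.168.76.2:8443/healthz >/dev/null; then
	  echo "port open but apiserver not responding" # -> handshake timeout / not ready
	else
	  echo "apiserver up"
	fi
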
	I0701 15:09:31.306485 3906202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.158897481s)
	W0701 15:09:31.306521 3906202 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0701 15:09:31.306539 3906202 retry.go:31] will retry after 2.697456663s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0701 15:09:32.115541 3906202 node_ready.go:49] node "old-k8s-version-474598" has status "Ready":"True"
	I0701 15:09:32.115568 3906202 node_ready.go:38] duration metric: took 19.84838605s for node "old-k8s-version-474598" to be "Ready" ...
	I0701 15:09:32.115578 3906202 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
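
This "extra waiting" phase polls each system-critical pod by label selector, for up to 6m0s apiece. Roughly the same check can be reproduced by hand with kubectl wait (selectors copied from the log line above):

	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
	done
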
	I0701 15:09:32.291389 3906202 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-6nqwr" in "kube-system" namespace to be "Ready" ...
	I0701 15:09:32.576951 3906202 pod_ready.go:92] pod "coredns-74ff55c5b-6nqwr" in "kube-system" namespace has status "Ready":"True"
	I0701 15:09:32.577042 3906202 pod_ready.go:81] duration metric: took 285.579767ms for pod "coredns-74ff55c5b-6nqwr" in "kube-system" namespace to be "Ready" ...
	I0701 15:09:32.577069 3906202 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-474598" in "kube-system" namespace to be "Ready" ...
	I0701 15:09:32.810058 3906202 pod_ready.go:92] pod "etcd-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"True"
	I0701 15:09:32.810133 3906202 pod_ready.go:81] duration metric: took 233.030566ms for pod "etcd-old-k8s-version-474598" in "kube-system" namespace to be "Ready" ...
	I0701 15:09:32.810168 3906202 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-474598" in "kube-system" namespace to be "Ready" ...
	I0701 15:09:33.051602 3906202 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"True"
	I0701 15:09:33.051679 3906202 pod_ready.go:81] duration metric: took 241.477041ms for pod "kube-apiserver-old-k8s-version-474598" in "kube-system" namespace to be "Ready" ...
	I0701 15:09:33.051706 3906202 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace to be "Ready" ...
	I0701 15:09:33.162911 3906202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.268691549s)
	I0701 15:09:34.004576 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0701 15:09:34.044833 3906202 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.957381544s)
	I0701 15:09:34.047307 3906202 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-474598 addons enable metrics-server
	
	I0701 15:09:35.063260 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:09:35.466175 3906202 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0701 15:09:36.375030 3906202 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-474598"
	I0701 15:09:36.376974 3906202 out.go:177] * Enabled addons: default-storageclass, dashboard, storage-provisioner, metrics-server
	I0701 15:09:36.378642 3906202 addons.go:510] duration metric: took 24.511227374s for enable addons: enabled=[default-storageclass dashboard storage-provisioner metrics-server]
	I0701 15:09:37.557804 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:09:39.558339 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:09:41.558822 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:09:44.058377 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:09:46.059155 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:09:48.063338 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:09:50.558137 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:09:52.615150 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:09:55.120189 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:09:57.572023 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:00.142509 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:02.565761 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:05.057576 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:07.059985 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:09.558332 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:11.558437 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:13.559934 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:16.058504 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:18.059181 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:20.558058 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:22.569860 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:25.058557 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:27.058892 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:29.559645 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:32.059335 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:34.558538 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:37.061118 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:39.061294 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:41.558049 3906202 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:44.058060 3906202 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"True"
	I0701 15:10:44.058086 3906202 pod_ready.go:81] duration metric: took 1m11.006359043s for pod "kube-controller-manager-old-k8s-version-474598" in "kube-system" namespace to be "Ready" ...
	I0701 15:10:44.058098 3906202 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-prspm" in "kube-system" namespace to be "Ready" ...
	I0701 15:10:44.063364 3906202 pod_ready.go:92] pod "kube-proxy-prspm" in "kube-system" namespace has status "Ready":"True"
	I0701 15:10:44.063397 3906202 pod_ready.go:81] duration metric: took 5.291149ms for pod "kube-proxy-prspm" in "kube-system" namespace to be "Ready" ...
	I0701 15:10:44.063411 3906202 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-474598" in "kube-system" namespace to be "Ready" ...
	I0701 15:10:46.070669 3906202 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:48.569239 3906202 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:50.569451 3906202 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:52.572695 3906202 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:54.070611 3906202 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-474598" in "kube-system" namespace has status "Ready":"True"
	I0701 15:10:54.070707 3906202 pod_ready.go:81] duration metric: took 10.007287393s for pod "kube-scheduler-old-k8s-version-474598" in "kube-system" namespace to be "Ready" ...
	I0701 15:10:54.070744 3906202 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace to be "Ready" ...
	I0701 15:10:56.079401 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:10:58.577888 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:01.077916 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:03.576820 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:05.577699 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:07.577964 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:10.077998 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:12.577103 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:15.078134 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:17.584150 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:19.587174 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:22.078160 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:24.079040 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:26.579218 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:29.077183 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:31.078355 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:33.576933 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:35.577405 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:38.078279 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:40.577719 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:43.148891 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:45.577503 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:48.077680 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:50.077973 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:52.082437 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:54.576971 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:57.076824 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:11:59.077453 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:01.577685 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:04.077180 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:06.576336 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:09.078151 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:11.078311 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:13.577047 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:15.578147 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:18.078193 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:20.078494 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:22.078748 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:24.577061 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:26.577155 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:29.078529 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:31.577158 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:33.591640 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:36.077644 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:38.077968 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:40.576579 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:43.076520 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:45.078699 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:47.577164 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:49.577441 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:52.077829 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:54.077924 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:56.078544 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:12:58.577401 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:01.077485 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:03.576429 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:05.578123 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:08.077803 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:10.577432 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:13.075853 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:15.080264 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:17.576825 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:19.586164 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:22.077877 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:24.577203 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:27.077351 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:29.077606 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:31.078489 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:33.577126 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:35.577527 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:38.076765 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:40.578894 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:43.077084 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:45.078975 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:47.577058 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:49.578738 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:52.077852 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:54.078069 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:56.575947 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:13:58.577348 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:01.078896 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:03.578211 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:06.077520 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:08.077803 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:10.576581 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:13.077515 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:15.085147 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:17.576991 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:19.584768 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:22.077378 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:24.576609 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:26.577209 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:28.577287 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:31.077898 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:33.577279 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:35.577336 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:38.078620 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:40.577043 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:42.582701 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:45.077960 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:47.577525 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:49.577576 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:51.584631 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:54.077742 3906202 pod_ready.go:102] pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace has status "Ready":"False"
	I0701 15:14:54.077779 3906202 pod_ready.go:81] duration metric: took 4m0.006996429s for pod "metrics-server-9975d5f86-99tkb" in "kube-system" namespace to be "Ready" ...
	E0701 15:14:54.077790 3906202 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0701 15:14:54.077799 3906202 pod_ready.go:38] duration metric: took 5m21.962210827s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
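
The only pod that never reaches Ready within the 4m0s window is metrics-server-9975d5f86-99tkb, and the kubelet excerpts gathered below show why: its image is pinned to the unreachable registry fake.domain (fake.domain/registry.k8s.io/echoserver:1.4), so every pull ends in ErrImagePull/ImagePullBackOff. A hedged way to confirm the pull failure directly (pod name taken from the log):

	kubectl -n kube-system describe pod metrics-server-9975d5f86-99tkb \
	  | grep -iE 'errimagepull|imagepullbackoff|fake\.domain'
	sudo crictl ps -a --name metrics-server   # the container never reaches a running state
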
	I0701 15:14:54.077812 3906202 api_server.go:52] waiting for apiserver process to appear ...
	I0701 15:14:54.077842 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 15:14:54.077907 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 15:14:54.119528 3906202 cri.go:89] found id: "29ff1e584547a3f1954c3b3bc8d86133f9f8821165607c401129fbb1ad25343b"
	I0701 15:14:54.119554 3906202 cri.go:89] found id: ""
	I0701 15:14:54.119563 3906202 logs.go:276] 1 containers: [29ff1e584547a3f1954c3b3bc8d86133f9f8821165607c401129fbb1ad25343b]
	I0701 15:14:54.119623 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:14:54.123844 3906202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 15:14:54.123924 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 15:14:54.162439 3906202 cri.go:89] found id: "8937951752f8cf91f00237b6ccb23193fd6ae6e0c75a210a7eb01e45df33434f"
	I0701 15:14:54.162470 3906202 cri.go:89] found id: ""
	I0701 15:14:54.162479 3906202 logs.go:276] 1 containers: [8937951752f8cf91f00237b6ccb23193fd6ae6e0c75a210a7eb01e45df33434f]
	I0701 15:14:54.162537 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:14:54.166433 3906202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 15:14:54.166510 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 15:14:54.211577 3906202 cri.go:89] found id: "585eb048d28eef3f91142d493ebacd44932dc6beaeb62efc44eef6e21a027d29"
	I0701 15:14:54.211605 3906202 cri.go:89] found id: ""
	I0701 15:14:54.211613 3906202 logs.go:276] 1 containers: [585eb048d28eef3f91142d493ebacd44932dc6beaeb62efc44eef6e21a027d29]
	I0701 15:14:54.211672 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:14:54.215370 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 15:14:54.215444 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 15:14:54.255491 3906202 cri.go:89] found id: "99b47a1789a53fcc22fad9c608f7e9a89470909c3bed1f74b857b5da84b94f8c"
	I0701 15:14:54.255520 3906202 cri.go:89] found id: ""
	I0701 15:14:54.255528 3906202 logs.go:276] 1 containers: [99b47a1789a53fcc22fad9c608f7e9a89470909c3bed1f74b857b5da84b94f8c]
	I0701 15:14:54.255585 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:14:54.259420 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 15:14:54.259493 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 15:14:54.297278 3906202 cri.go:89] found id: "4f612ce98e504e45b0b7d45ab196646d112950fff7af0818ed6b6ae20f451730"
	I0701 15:14:54.297301 3906202 cri.go:89] found id: ""
	I0701 15:14:54.297309 3906202 logs.go:276] 1 containers: [4f612ce98e504e45b0b7d45ab196646d112950fff7af0818ed6b6ae20f451730]
	I0701 15:14:54.297364 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:14:54.300930 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 15:14:54.300999 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 15:14:54.341713 3906202 cri.go:89] found id: "6f249a20156ffcc8d1b05a5a0133a0476123eab1338f65400a301afe0851c461"
	I0701 15:14:54.341737 3906202 cri.go:89] found id: ""
	I0701 15:14:54.341746 3906202 logs.go:276] 1 containers: [6f249a20156ffcc8d1b05a5a0133a0476123eab1338f65400a301afe0851c461]
	I0701 15:14:54.341801 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:14:54.345546 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 15:14:54.345633 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 15:14:54.394185 3906202 cri.go:89] found id: "d6c47f5e5c008f8c4904f4fab278a0c43bd06a808a8fc9f67cc24c2e47316d28"
	I0701 15:14:54.394216 3906202 cri.go:89] found id: ""
	I0701 15:14:54.394225 3906202 logs.go:276] 1 containers: [d6c47f5e5c008f8c4904f4fab278a0c43bd06a808a8fc9f67cc24c2e47316d28]
	I0701 15:14:54.394321 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:14:54.398017 3906202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0701 15:14:54.398133 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 15:14:54.438425 3906202 cri.go:89] found id: "25c8776f1771df2532ca9cb51b3c40a3778154fec4f38ce0727c6e4b29adc787"
	I0701 15:14:54.438448 3906202 cri.go:89] found id: ""
	I0701 15:14:54.438456 3906202 logs.go:276] 1 containers: [25c8776f1771df2532ca9cb51b3c40a3778154fec4f38ce0727c6e4b29adc787]
	I0701 15:14:54.438543 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:14:54.441956 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 15:14:54.442024 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 15:14:54.487181 3906202 cri.go:89] found id: "870578f023cca426ce7d3f51bb2af8cb79612ae25002a652dcbcb30bc1690ed1"
	I0701 15:14:54.487201 3906202 cri.go:89] found id: ""
	I0701 15:14:54.487208 3906202 logs.go:276] 1 containers: [870578f023cca426ce7d3f51bb2af8cb79612ae25002a652dcbcb30bc1690ed1]
	I0701 15:14:54.487262 3906202 ssh_runner.go:195] Run: which crictl
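
For log collection, minikube enumerates one container ID per control-plane component with crictl and then tails each container's logs. The same enumeration done by hand (component names as in the cri.go lines above):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
	  printf '%s: ' "$name"
	  sudo crictl ps -a --quiet --name="$name"
	done
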
	I0701 15:14:54.490937 3906202 logs.go:123] Gathering logs for dmesg ...
	I0701 15:14:54.490964 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 15:14:54.511878 3906202 logs.go:123] Gathering logs for coredns [585eb048d28eef3f91142d493ebacd44932dc6beaeb62efc44eef6e21a027d29] ...
	I0701 15:14:54.511921 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585eb048d28eef3f91142d493ebacd44932dc6beaeb62efc44eef6e21a027d29"
	I0701 15:14:54.553470 3906202 logs.go:123] Gathering logs for kindnet [d6c47f5e5c008f8c4904f4fab278a0c43bd06a808a8fc9f67cc24c2e47316d28] ...
	I0701 15:14:54.553498 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6c47f5e5c008f8c4904f4fab278a0c43bd06a808a8fc9f67cc24c2e47316d28"
	I0701 15:14:54.602162 3906202 logs.go:123] Gathering logs for container status ...
	I0701 15:14:54.602218 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 15:14:54.653815 3906202 logs.go:123] Gathering logs for describe nodes ...
	I0701 15:14:54.653846 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 15:14:54.882785 3906202 logs.go:123] Gathering logs for kube-apiserver [29ff1e584547a3f1954c3b3bc8d86133f9f8821165607c401129fbb1ad25343b] ...
	I0701 15:14:54.882821 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29ff1e584547a3f1954c3b3bc8d86133f9f8821165607c401129fbb1ad25343b"
	I0701 15:14:54.953309 3906202 logs.go:123] Gathering logs for kube-scheduler [99b47a1789a53fcc22fad9c608f7e9a89470909c3bed1f74b857b5da84b94f8c] ...
	I0701 15:14:54.953342 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99b47a1789a53fcc22fad9c608f7e9a89470909c3bed1f74b857b5da84b94f8c"
	I0701 15:14:54.999177 3906202 logs.go:123] Gathering logs for kube-controller-manager [6f249a20156ffcc8d1b05a5a0133a0476123eab1338f65400a301afe0851c461] ...
	I0701 15:14:54.999207 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f249a20156ffcc8d1b05a5a0133a0476123eab1338f65400a301afe0851c461"
	I0701 15:14:55.072749 3906202 logs.go:123] Gathering logs for storage-provisioner [25c8776f1771df2532ca9cb51b3c40a3778154fec4f38ce0727c6e4b29adc787] ...
	I0701 15:14:55.072785 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25c8776f1771df2532ca9cb51b3c40a3778154fec4f38ce0727c6e4b29adc787"
	I0701 15:14:55.113887 3906202 logs.go:123] Gathering logs for kubelet ...
	I0701 15:14:55.113914 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 15:14:55.167005 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.099443     731 reflector.go:138] object-"kube-system"/"kube-proxy-token-klmzs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-klmzs" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:14:55.167235 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.099649     731 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:14:55.167449 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.113876     731 reflector.go:138] object-"kube-system"/"kindnet-token-9sd5n": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-9sd5n" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:14:55.167676 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.114117     731 reflector.go:138] object-"kube-system"/"metrics-server-token-tnwnp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-tnwnp" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:14:55.167907 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.114192     731 reflector.go:138] object-"kube-system"/"storage-provisioner-token-r599k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-r599k" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:14:55.168138 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.114241     731 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:14:55.168355 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.114287     731 reflector.go:138] object-"kube-system"/"coredns-token-n8gzt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-n8gzt" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:14:55.168565 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.114343     731 reflector.go:138] object-"default"/"default-token-x4wpk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-x4wpk" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-474598' and this object
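
This first batch of kubelet problems ("no relationship found between node ... and this object") is the node authorizer at work: right after the restart, the kubelet tries to watch secrets and configmaps before its pods are re-bound to the node, so the apiserver denies the list requests. The denials stop on their own once pod binding catches up, which can be spot-checked with (node name from the log):

	kubectl get pods -A --field-selector spec.nodeName=old-k8s-version-474598
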
	W0701 15:14:55.176812 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:33 old-k8s-version-474598 kubelet[731]: E0701 15:09:33.615070     731 pod_workers.go:191] Error syncing pod 6efc2390-ffa6-4d25-bc86-2270ae775d16 ("storage-provisioner_kube-system(6efc2390-ffa6-4d25-bc86-2270ae775d16)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:14:55.177809 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:33 old-k8s-version-474598 kubelet[731]: E0701 15:09:33.671634     731 pod_workers.go:191] Error syncing pod 6efc2390-ffa6-4d25-bc86-2270ae775d16 ("storage-provisioner_kube-system(6efc2390-ffa6-4d25-bc86-2270ae775d16)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:14:55.179042 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:34 old-k8s-version-474598 kubelet[731]: E0701 15:09:34.307484     731 pod_workers.go:191] Error syncing pod 91e015d1-1afc-4016-8924-d4032065550c ("busybox_default(91e015d1-1afc-4016-8924-d4032065550c)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:14:55.180068 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:34 old-k8s-version-474598 kubelet[731]: E0701 15:09:34.687641     731 pod_workers.go:191] Error syncing pod 91e015d1-1afc-4016-8924-d4032065550c ("busybox_default(91e015d1-1afc-4016-8924-d4032065550c)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:14:55.181728 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:34 old-k8s-version-474598 kubelet[731]: E0701 15:09:34.820470     731 pod_workers.go:191] Error syncing pod f29baf52-c4df-4915-b79f-078a24cb4a9f ("kindnet-4k4lt_kube-system(f29baf52-c4df-4915-b79f-078a24cb4a9f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:14:55.185623 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:34 old-k8s-version-474598 kubelet[731]: E0701 15:09:34.917343     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0701 15:14:55.187449 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:35 old-k8s-version-474598 kubelet[731]: E0701 15:09:35.689857     731 pod_workers.go:191] Error syncing pod f29baf52-c4df-4915-b79f-078a24cb4a9f ("kindnet-4k4lt_kube-system(f29baf52-c4df-4915-b79f-078a24cb4a9f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:14:55.187638 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:35 old-k8s-version-474598 kubelet[731]: E0701 15:09:35.698534     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.190105 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:48 old-k8s-version-474598 kubelet[731]: E0701 15:09:48.631681     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0701 15:14:55.191581 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:59 old-k8s-version-474598 kubelet[731]: E0701 15:09:59.621836     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.191907 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:59 old-k8s-version-474598 kubelet[731]: E0701 15:09:59.941603     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.192378 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:00 old-k8s-version-474598 kubelet[731]: E0701 15:10:00.943885     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.192718 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:01 old-k8s-version-474598 kubelet[731]: E0701 15:10:01.945146     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.194795 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:13 old-k8s-version-474598 kubelet[731]: E0701 15:10:13.631502     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0701 15:14:55.195400 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:15 old-k8s-version-474598 kubelet[731]: E0701 15:10:15.965751     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.195728 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:20 old-k8s-version-474598 kubelet[731]: E0701 15:10:20.353758     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.195912 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:27 old-k8s-version-474598 kubelet[731]: E0701 15:10:27.621155     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.196240 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:32 old-k8s-version-474598 kubelet[731]: E0701 15:10:32.620846     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.196425 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:42 old-k8s-version-474598 kubelet[731]: E0701 15:10:42.621573     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.197007 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:47 old-k8s-version-474598 kubelet[731]: E0701 15:10:47.017959     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.197338 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:50 old-k8s-version-474598 kubelet[731]: E0701 15:10:50.358024     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.199400 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:54 old-k8s-version-474598 kubelet[731]: E0701 15:10:54.630563     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0701 15:14:55.199724 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:01 old-k8s-version-474598 kubelet[731]: E0701 15:11:01.621361     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.199909 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:07 old-k8s-version-474598 kubelet[731]: E0701 15:11:07.623694     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.200233 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:16 old-k8s-version-474598 kubelet[731]: E0701 15:11:16.620700     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.200418 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:18 old-k8s-version-474598 kubelet[731]: E0701 15:11:18.621707     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.201018 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:28 old-k8s-version-474598 kubelet[731]: E0701 15:11:28.082520     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.201201 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:29 old-k8s-version-474598 kubelet[731]: E0701 15:11:29.621991     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.201526 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:30 old-k8s-version-474598 kubelet[731]: E0701 15:11:30.353335     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.201713 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:41 old-k8s-version-474598 kubelet[731]: E0701 15:11:41.622209     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.202038 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:42 old-k8s-version-474598 kubelet[731]: E0701 15:11:42.620768     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.202247 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:56 old-k8s-version-474598 kubelet[731]: E0701 15:11:56.621364     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.202609 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:56 old-k8s-version-474598 kubelet[731]: E0701 15:11:56.622121     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.202795 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:07 old-k8s-version-474598 kubelet[731]: E0701 15:12:07.622340     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.203121 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:08 old-k8s-version-474598 kubelet[731]: E0701 15:12:08.621109     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.203962 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:20 old-k8s-version-474598 kubelet[731]: E0701 15:12:20.620702     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.206012 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:20 old-k8s-version-474598 kubelet[731]: E0701 15:12:20.631313     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0701 15:14:55.206345 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:34 old-k8s-version-474598 kubelet[731]: E0701 15:12:34.620973     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.206529 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:34 old-k8s-version-474598 kubelet[731]: E0701 15:12:34.622005     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.206754 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:48 old-k8s-version-474598 kubelet[731]: E0701 15:12:48.621512     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.207342 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:51 old-k8s-version-474598 kubelet[731]: E0701 15:12:51.233147     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.207669 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:00 old-k8s-version-474598 kubelet[731]: E0701 15:13:00.353139     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.207856 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:00 old-k8s-version-474598 kubelet[731]: E0701 15:13:00.621289     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.208183 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:12 old-k8s-version-474598 kubelet[731]: E0701 15:13:12.621008     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.208369 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:12 old-k8s-version-474598 kubelet[731]: E0701 15:13:12.621572     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.208556 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:24 old-k8s-version-474598 kubelet[731]: E0701 15:13:24.621230     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.208994 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:25 old-k8s-version-474598 kubelet[731]: E0701 15:13:25.620763     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.209195 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:37 old-k8s-version-474598 kubelet[731]: E0701 15:13:37.621574     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.209522 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:39 old-k8s-version-474598 kubelet[731]: E0701 15:13:39.621765     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.209705 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:50 old-k8s-version-474598 kubelet[731]: E0701 15:13:50.621437     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.210097 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:54 old-k8s-version-474598 kubelet[731]: E0701 15:13:54.620755     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.210286 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:01 old-k8s-version-474598 kubelet[731]: E0701 15:14:01.621745     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.210618 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:07 old-k8s-version-474598 kubelet[731]: E0701 15:14:07.620754     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.210809 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:16 old-k8s-version-474598 kubelet[731]: E0701 15:14:16.622062     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.211382 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:22 old-k8s-version-474598 kubelet[731]: E0701 15:14:22.620813     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.211567 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:31 old-k8s-version-474598 kubelet[731]: E0701 15:14:31.621391     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.211892 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:33 old-k8s-version-474598 kubelet[731]: E0701 15:14:33.620802     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.212077 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:44 old-k8s-version-474598 kubelet[731]: E0701 15:14:44.621361     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.212401 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:48 old-k8s-version-474598 kubelet[731]: E0701 15:14:48.620740     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
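	(The kubelet problems above are collected by tailing the kubelet unit journal inside the node. A minimal sketch of reproducing the same scan by hand, assuming the profile name from this run and that journalctl and grep are available in the node image:

	minikube -p old-k8s-version-474598 ssh -- \
	  'sudo journalctl -u kubelet -n 400 | grep -E "reflector.go|pod_workers.go"'

	This surfaces the same reflector RBAC errors and pod_workers sync failures that logs.go:138 flags above.)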
	I0701 15:14:55.212412 3906202 logs.go:123] Gathering logs for CRI-O ...
	I0701 15:14:55.212426 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 15:14:55.302737 3906202 logs.go:123] Gathering logs for etcd [8937951752f8cf91f00237b6ccb23193fd6ae6e0c75a210a7eb01e45df33434f] ...
	I0701 15:14:55.302775 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8937951752f8cf91f00237b6ccb23193fd6ae6e0c75a210a7eb01e45df33434f"
	I0701 15:14:55.347870 3906202 logs.go:123] Gathering logs for kube-proxy [4f612ce98e504e45b0b7d45ab196646d112950fff7af0818ed6b6ae20f451730] ...
	I0701 15:14:55.347898 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f612ce98e504e45b0b7d45ab196646d112950fff7af0818ed6b6ae20f451730"
	I0701 15:14:55.398009 3906202 logs.go:123] Gathering logs for kubernetes-dashboard [870578f023cca426ce7d3f51bb2af8cb79612ae25002a652dcbcb30bc1690ed1] ...
	I0701 15:14:55.398046 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 870578f023cca426ce7d3f51bb2af8cb79612ae25002a652dcbcb30bc1690ed1"
	I0701 15:14:55.452518 3906202 out.go:304] Setting ErrFile to fd 2...
	I0701 15:14:55.452598 3906202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0701 15:14:55.452657 3906202 out.go:239] X Problems detected in kubelet:

	W0701 15:14:55.452672 3906202 out.go:239]   Jul 01 15:14:22 old-k8s-version-474598 kubelet[731]: E0701 15:14:22.620813     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.452685 3906202 out.go:239]   Jul 01 15:14:31 old-k8s-version-474598 kubelet[731]: E0701 15:14:31.621391     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.452698 3906202 out.go:239]   Jul 01 15:14:33 old-k8s-version-474598 kubelet[731]: E0701 15:14:33.620802     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:14:55.452827 3906202 out.go:239]   Jul 01 15:14:44 old-k8s-version-474598 kubelet[731]: E0701 15:14:44.621361     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:14:55.452844 3906202 out.go:239]   Jul 01 15:14:48 old-k8s-version-474598 kubelet[731]: E0701 15:14:48.620740     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	I0701 15:14:55.452851 3906202 out.go:304] Setting ErrFile to fd 2...
	I0701 15:14:55.452858 3906202 out.go:338] TERM=,COLORTERM=, which probably does not support color
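	(The pass above ends with the ErrFile/TERM reset, and the next one at 15:15:05 begins the apiserver healthz wait. A rough manual equivalent, assuming the default :8443 apiserver port; the exact address is whatever `minikube -p old-k8s-version-474598 ip` reports, and with the docker driver the port may only be reachable from inside the node:

	# locate the apiserver process inside the node, as the log does
	minikube -p old-k8s-version-474598 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# then probe healthz; "ok" indicates a healthy apiserver
	curl -sk "https://$(minikube -p old-k8s-version-474598 ip):8443/healthz")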
	I0701 15:15:05.454233 3906202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 15:15:05.473885 3906202 api_server.go:72] duration metric: took 5m53.606849813s to wait for apiserver process to appear ...
	I0701 15:15:05.473914 3906202 api_server.go:88] waiting for apiserver healthz status ...
	I0701 15:15:05.473951 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0701 15:15:05.474013 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0701 15:15:05.524560 3906202 cri.go:89] found id: "29ff1e584547a3f1954c3b3bc8d86133f9f8821165607c401129fbb1ad25343b"
	I0701 15:15:05.524587 3906202 cri.go:89] found id: ""
	I0701 15:15:05.524596 3906202 logs.go:276] 1 containers: [29ff1e584547a3f1954c3b3bc8d86133f9f8821165607c401129fbb1ad25343b]
	I0701 15:15:05.524652 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:15:05.529846 3906202 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0701 15:15:05.529919 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0701 15:15:05.600339 3906202 cri.go:89] found id: "8937951752f8cf91f00237b6ccb23193fd6ae6e0c75a210a7eb01e45df33434f"
	I0701 15:15:05.600365 3906202 cri.go:89] found id: ""
	I0701 15:15:05.600374 3906202 logs.go:276] 1 containers: [8937951752f8cf91f00237b6ccb23193fd6ae6e0c75a210a7eb01e45df33434f]
	I0701 15:15:05.600428 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:15:05.604351 3906202 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0701 15:15:05.604425 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0701 15:15:05.664812 3906202 cri.go:89] found id: "585eb048d28eef3f91142d493ebacd44932dc6beaeb62efc44eef6e21a027d29"
	I0701 15:15:05.664837 3906202 cri.go:89] found id: ""
	I0701 15:15:05.664845 3906202 logs.go:276] 1 containers: [585eb048d28eef3f91142d493ebacd44932dc6beaeb62efc44eef6e21a027d29]
	I0701 15:15:05.664902 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:15:05.669275 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0701 15:15:05.669349 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0701 15:15:05.728723 3906202 cri.go:89] found id: "99b47a1789a53fcc22fad9c608f7e9a89470909c3bed1f74b857b5da84b94f8c"
	I0701 15:15:05.728749 3906202 cri.go:89] found id: ""
	I0701 15:15:05.728758 3906202 logs.go:276] 1 containers: [99b47a1789a53fcc22fad9c608f7e9a89470909c3bed1f74b857b5da84b94f8c]
	I0701 15:15:05.728814 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:15:05.733409 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0701 15:15:05.733482 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0701 15:15:05.784731 3906202 cri.go:89] found id: "4f612ce98e504e45b0b7d45ab196646d112950fff7af0818ed6b6ae20f451730"
	I0701 15:15:05.784755 3906202 cri.go:89] found id: ""
	I0701 15:15:05.784764 3906202 logs.go:276] 1 containers: [4f612ce98e504e45b0b7d45ab196646d112950fff7af0818ed6b6ae20f451730]
	I0701 15:15:05.784822 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:15:05.789125 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0701 15:15:05.789194 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0701 15:15:05.839383 3906202 cri.go:89] found id: "6f249a20156ffcc8d1b05a5a0133a0476123eab1338f65400a301afe0851c461"
	I0701 15:15:05.839407 3906202 cri.go:89] found id: ""
	I0701 15:15:05.839416 3906202 logs.go:276] 1 containers: [6f249a20156ffcc8d1b05a5a0133a0476123eab1338f65400a301afe0851c461]
	I0701 15:15:05.839469 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:15:05.851445 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0701 15:15:05.851531 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0701 15:15:05.905999 3906202 cri.go:89] found id: "d6c47f5e5c008f8c4904f4fab278a0c43bd06a808a8fc9f67cc24c2e47316d28"
	I0701 15:15:05.906020 3906202 cri.go:89] found id: ""
	I0701 15:15:05.906028 3906202 logs.go:276] 1 containers: [d6c47f5e5c008f8c4904f4fab278a0c43bd06a808a8fc9f67cc24c2e47316d28]
	I0701 15:15:05.906083 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:15:05.910640 3906202 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0701 15:15:05.910712 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0701 15:15:05.976594 3906202 cri.go:89] found id: "870578f023cca426ce7d3f51bb2af8cb79612ae25002a652dcbcb30bc1690ed1"
	I0701 15:15:05.976616 3906202 cri.go:89] found id: ""
	I0701 15:15:05.976624 3906202 logs.go:276] 1 containers: [870578f023cca426ce7d3f51bb2af8cb79612ae25002a652dcbcb30bc1690ed1]
	I0701 15:15:05.976682 3906202 ssh_runner.go:195] Run: which crictl
	I0701 15:15:05.983254 3906202 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0701 15:15:05.983327 3906202 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0701 15:15:06.043175 3906202 cri.go:89] found id: "25c8776f1771df2532ca9cb51b3c40a3778154fec4f38ce0727c6e4b29adc787"
	I0701 15:15:06.043193 3906202 cri.go:89] found id: ""
	I0701 15:15:06.043201 3906202 logs.go:276] 1 containers: [25c8776f1771df2532ca9cb51b3c40a3778154fec4f38ce0727c6e4b29adc787]
	I0701 15:15:06.043254 3906202 ssh_runner.go:195] Run: which crictl
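	(Each component above is located with the same crictl query before its logs are tailed. An equivalent shell loop, assuming crictl is on PATH inside the node, e.g. run via `minikube ssh`:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
	  # --quiet prints only container IDs; --name filters by the CRI container name
	  sudo crictl ps -a --quiet --name="$name"
	done

	The single ID found per component is then fed to `sudo /usr/bin/crictl logs --tail 400 <id>`, as the Run lines below show.)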
	I0701 15:15:06.047697 3906202 logs.go:123] Gathering logs for kube-scheduler [99b47a1789a53fcc22fad9c608f7e9a89470909c3bed1f74b857b5da84b94f8c] ...
	I0701 15:15:06.047732 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99b47a1789a53fcc22fad9c608f7e9a89470909c3bed1f74b857b5da84b94f8c"
	I0701 15:15:06.119383 3906202 logs.go:123] Gathering logs for kube-proxy [4f612ce98e504e45b0b7d45ab196646d112950fff7af0818ed6b6ae20f451730] ...
	I0701 15:15:06.119415 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f612ce98e504e45b0b7d45ab196646d112950fff7af0818ed6b6ae20f451730"
	I0701 15:15:06.194064 3906202 logs.go:123] Gathering logs for kindnet [d6c47f5e5c008f8c4904f4fab278a0c43bd06a808a8fc9f67cc24c2e47316d28] ...
	I0701 15:15:06.194089 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6c47f5e5c008f8c4904f4fab278a0c43bd06a808a8fc9f67cc24c2e47316d28"
	I0701 15:15:06.249549 3906202 logs.go:123] Gathering logs for storage-provisioner [25c8776f1771df2532ca9cb51b3c40a3778154fec4f38ce0727c6e4b29adc787] ...
	I0701 15:15:06.249627 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25c8776f1771df2532ca9cb51b3c40a3778154fec4f38ce0727c6e4b29adc787"
	I0701 15:15:06.315267 3906202 logs.go:123] Gathering logs for etcd [8937951752f8cf91f00237b6ccb23193fd6ae6e0c75a210a7eb01e45df33434f] ...
	I0701 15:15:06.315293 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8937951752f8cf91f00237b6ccb23193fd6ae6e0c75a210a7eb01e45df33434f"
	I0701 15:15:06.391378 3906202 logs.go:123] Gathering logs for kubernetes-dashboard [870578f023cca426ce7d3f51bb2af8cb79612ae25002a652dcbcb30bc1690ed1] ...
	I0701 15:15:06.391421 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 870578f023cca426ce7d3f51bb2af8cb79612ae25002a652dcbcb30bc1690ed1"
	I0701 15:15:06.455727 3906202 logs.go:123] Gathering logs for CRI-O ...
	I0701 15:15:06.455753 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0701 15:15:06.566923 3906202 logs.go:123] Gathering logs for container status ...
	I0701 15:15:06.566998 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0701 15:15:06.633327 3906202 logs.go:123] Gathering logs for coredns [585eb048d28eef3f91142d493ebacd44932dc6beaeb62efc44eef6e21a027d29] ...
	I0701 15:15:06.633361 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 585eb048d28eef3f91142d493ebacd44932dc6beaeb62efc44eef6e21a027d29"
	I0701 15:15:06.710802 3906202 logs.go:123] Gathering logs for dmesg ...
	I0701 15:15:06.710830 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0701 15:15:06.734209 3906202 logs.go:123] Gathering logs for describe nodes ...
	I0701 15:15:06.734303 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0701 15:15:06.935303 3906202 logs.go:123] Gathering logs for kube-apiserver [29ff1e584547a3f1954c3b3bc8d86133f9f8821165607c401129fbb1ad25343b] ...
	I0701 15:15:06.935342 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29ff1e584547a3f1954c3b3bc8d86133f9f8821165607c401129fbb1ad25343b"
	I0701 15:15:07.053963 3906202 logs.go:123] Gathering logs for kubelet ...
	I0701 15:15:07.054001 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0701 15:15:07.123967 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.099443     731 reflector.go:138] object-"kube-system"/"kube-proxy-token-klmzs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-klmzs" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:15:07.124190 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.099649     731 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:15:07.124402 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.113876     731 reflector.go:138] object-"kube-system"/"kindnet-token-9sd5n": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-9sd5n" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:15:07.124626 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.114117     731 reflector.go:138] object-"kube-system"/"metrics-server-token-tnwnp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-tnwnp" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:15:07.124851 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.114192     731 reflector.go:138] object-"kube-system"/"storage-provisioner-token-r599k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-r599k" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:15:07.125241 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.114241     731 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:15:07.125462 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.114287     731 reflector.go:138] object-"kube-system"/"coredns-token-n8gzt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-n8gzt" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:15:07.125669 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:32 old-k8s-version-474598 kubelet[731]: E0701 15:09:32.114343     731 reflector.go:138] object-"default"/"default-token-x4wpk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-x4wpk" is forbidden: User "system:node:old-k8s-version-474598" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-474598' and this object
	W0701 15:15:07.139487 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:33 old-k8s-version-474598 kubelet[731]: E0701 15:09:33.615070     731 pod_workers.go:191] Error syncing pod 6efc2390-ffa6-4d25-bc86-2270ae775d16 ("storage-provisioner_kube-system(6efc2390-ffa6-4d25-bc86-2270ae775d16)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:15:07.140427 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:33 old-k8s-version-474598 kubelet[731]: E0701 15:09:33.671634     731 pod_workers.go:191] Error syncing pod 6efc2390-ffa6-4d25-bc86-2270ae775d16 ("storage-provisioner_kube-system(6efc2390-ffa6-4d25-bc86-2270ae775d16)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:15:07.141999 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:34 old-k8s-version-474598 kubelet[731]: E0701 15:09:34.307484     731 pod_workers.go:191] Error syncing pod 91e015d1-1afc-4016-8924-d4032065550c ("busybox_default(91e015d1-1afc-4016-8924-d4032065550c)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:15:07.143027 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:34 old-k8s-version-474598 kubelet[731]: E0701 15:09:34.687641     731 pod_workers.go:191] Error syncing pod 91e015d1-1afc-4016-8924-d4032065550c ("busybox_default(91e015d1-1afc-4016-8924-d4032065550c)"), skipping: failed to "StartContainer" for "busybox" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:15:07.144654 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:34 old-k8s-version-474598 kubelet[731]: E0701 15:09:34.820470     731 pod_workers.go:191] Error syncing pod f29baf52-c4df-4915-b79f-078a24cb4a9f ("kindnet-4k4lt_kube-system(f29baf52-c4df-4915-b79f-078a24cb4a9f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:15:07.148368 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:34 old-k8s-version-474598 kubelet[731]: E0701 15:09:34.917343     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0701 15:15:07.150584 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:35 old-k8s-version-474598 kubelet[731]: E0701 15:09:35.689857     731 pod_workers.go:191] Error syncing pod f29baf52-c4df-4915-b79f-078a24cb4a9f ("kindnet-4k4lt_kube-system(f29baf52-c4df-4915-b79f-078a24cb4a9f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CreateContainerConfigError: "services have not yet been read at least once, cannot construct envvars"
	W0701 15:15:07.150784 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:35 old-k8s-version-474598 kubelet[731]: E0701 15:09:35.698534     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.153390 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:48 old-k8s-version-474598 kubelet[731]: E0701 15:09:48.631681     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0701 15:15:07.155221 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:59 old-k8s-version-474598 kubelet[731]: E0701 15:09:59.621836     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.155562 3906202 logs.go:138] Found kubelet problem: Jul 01 15:09:59 old-k8s-version-474598 kubelet[731]: E0701 15:09:59.941603     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.156018 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:00 old-k8s-version-474598 kubelet[731]: E0701 15:10:00.943885     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.156346 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:01 old-k8s-version-474598 kubelet[731]: E0701 15:10:01.945146     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.163341 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:13 old-k8s-version-474598 kubelet[731]: E0701 15:10:13.631502     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0701 15:15:07.163967 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:15 old-k8s-version-474598 kubelet[731]: E0701 15:10:15.965751     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.164296 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:20 old-k8s-version-474598 kubelet[731]: E0701 15:10:20.353758     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.164481 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:27 old-k8s-version-474598 kubelet[731]: E0701 15:10:27.621155     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.164806 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:32 old-k8s-version-474598 kubelet[731]: E0701 15:10:32.620846     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.164990 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:42 old-k8s-version-474598 kubelet[731]: E0701 15:10:42.621573     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.165591 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:47 old-k8s-version-474598 kubelet[731]: E0701 15:10:47.017959     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.165918 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:50 old-k8s-version-474598 kubelet[731]: E0701 15:10:50.358024     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.167963 3906202 logs.go:138] Found kubelet problem: Jul 01 15:10:54 old-k8s-version-474598 kubelet[731]: E0701 15:10:54.630563     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0701 15:15:07.168291 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:01 old-k8s-version-474598 kubelet[731]: E0701 15:11:01.621361     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.168475 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:07 old-k8s-version-474598 kubelet[731]: E0701 15:11:07.623694     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.168802 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:16 old-k8s-version-474598 kubelet[731]: E0701 15:11:16.620700     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.168989 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:18 old-k8s-version-474598 kubelet[731]: E0701 15:11:18.621707     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.169598 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:28 old-k8s-version-474598 kubelet[731]: E0701 15:11:28.082520     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.169782 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:29 old-k8s-version-474598 kubelet[731]: E0701 15:11:29.621991     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.170112 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:30 old-k8s-version-474598 kubelet[731]: E0701 15:11:30.353335     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.170300 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:41 old-k8s-version-474598 kubelet[731]: E0701 15:11:41.622209     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.170628 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:42 old-k8s-version-474598 kubelet[731]: E0701 15:11:42.620768     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.170813 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:56 old-k8s-version-474598 kubelet[731]: E0701 15:11:56.621364     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.171140 3906202 logs.go:138] Found kubelet problem: Jul 01 15:11:56 old-k8s-version-474598 kubelet[731]: E0701 15:11:56.622121     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.171323 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:07 old-k8s-version-474598 kubelet[731]: E0701 15:12:07.622340     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.171650 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:08 old-k8s-version-474598 kubelet[731]: E0701 15:12:08.621109     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.172491 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:20 old-k8s-version-474598 kubelet[731]: E0701 15:12:20.620702     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.175122 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:20 old-k8s-version-474598 kubelet[731]: E0701 15:12:20.631313     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0701 15:15:07.175500 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:34 old-k8s-version-474598 kubelet[731]: E0701 15:12:34.620973     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.175715 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:34 old-k8s-version-474598 kubelet[731]: E0701 15:12:34.622005     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.175925 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:48 old-k8s-version-474598 kubelet[731]: E0701 15:12:48.621512     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.176539 3906202 logs.go:138] Found kubelet problem: Jul 01 15:12:51 old-k8s-version-474598 kubelet[731]: E0701 15:12:51.233147     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.176947 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:00 old-k8s-version-474598 kubelet[731]: E0701 15:13:00.353139     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.177170 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:00 old-k8s-version-474598 kubelet[731]: E0701 15:13:00.621289     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.178700 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:12 old-k8s-version-474598 kubelet[731]: E0701 15:13:12.621008     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.178996 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:12 old-k8s-version-474598 kubelet[731]: E0701 15:13:12.621572     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.179219 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:24 old-k8s-version-474598 kubelet[731]: E0701 15:13:24.621230     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.179574 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:25 old-k8s-version-474598 kubelet[731]: E0701 15:13:25.620763     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.179785 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:37 old-k8s-version-474598 kubelet[731]: E0701 15:13:37.621574     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.180140 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:39 old-k8s-version-474598 kubelet[731]: E0701 15:13:39.621765     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.180353 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:50 old-k8s-version-474598 kubelet[731]: E0701 15:13:50.621437     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.180708 3906202 logs.go:138] Found kubelet problem: Jul 01 15:13:54 old-k8s-version-474598 kubelet[731]: E0701 15:13:54.620755     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.180922 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:01 old-k8s-version-474598 kubelet[731]: E0701 15:14:01.621745     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.181292 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:07 old-k8s-version-474598 kubelet[731]: E0701 15:14:07.620754     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.181507 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:16 old-k8s-version-474598 kubelet[731]: E0701 15:14:16.622062     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.182110 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:22 old-k8s-version-474598 kubelet[731]: E0701 15:14:22.620813     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.182330 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:31 old-k8s-version-474598 kubelet[731]: E0701 15:14:31.621391     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.182743 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:33 old-k8s-version-474598 kubelet[731]: E0701 15:14:33.620802     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.182974 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:44 old-k8s-version-474598 kubelet[731]: E0701 15:14:44.621361     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.183328 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:48 old-k8s-version-474598 kubelet[731]: E0701 15:14:48.620740     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.183540 3906202 logs.go:138] Found kubelet problem: Jul 01 15:14:58 old-k8s-version-474598 kubelet[731]: E0701 15:14:58.621427     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.183895 3906202 logs.go:138] Found kubelet problem: Jul 01 15:15:01 old-k8s-version-474598 kubelet[731]: E0701 15:15:01.620856     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	I0701 15:15:07.183925 3906202 logs.go:123] Gathering logs for kube-controller-manager [6f249a20156ffcc8d1b05a5a0133a0476123eab1338f65400a301afe0851c461] ...
	I0701 15:15:07.183955 3906202 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f249a20156ffcc8d1b05a5a0133a0476123eab1338f65400a301afe0851c461"
	I0701 15:15:07.290973 3906202 out.go:304] Setting ErrFile to fd 2...
	I0701 15:15:07.291010 3906202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0701 15:15:07.291072 3906202 out.go:239] X Problems detected in kubelet:
	W0701 15:15:07.291086 3906202 out.go:239]   Jul 01 15:14:33 old-k8s-version-474598 kubelet[731]: E0701 15:14:33.620802     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.291102 3906202 out.go:239]   Jul 01 15:14:44 old-k8s-version-474598 kubelet[731]: E0701 15:14:44.621361     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.291111 3906202 out.go:239]   Jul 01 15:14:48 old-k8s-version-474598 kubelet[731]: E0701 15:14:48.620740     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	W0701 15:15:07.291128 3906202 out.go:239]   Jul 01 15:14:58 old-k8s-version-474598 kubelet[731]: E0701 15:14:58.621427     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0701 15:15:07.291136 3906202 out.go:239]   Jul 01 15:15:01 old-k8s-version-474598 kubelet[731]: E0701 15:15:01.620856     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	I0701 15:15:07.291145 3906202 out.go:304] Setting ErrFile to fd 2...
	I0701 15:15:07.291152 3906202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 15:15:17.292251 3906202 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0701 15:15:17.303658 3906202 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0701 15:15:17.328014 3906202 out.go:177] 
	W0701 15:15:17.337150 3906202 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0701 15:15:17.337199 3906202 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0701 15:15:17.337218 3906202 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0701 15:15:17.337223 3906202 out.go:239] * 
	W0701 15:15:17.343698 3906202 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 15:15:17.365344 3906202 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-474598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 102
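Note: exit status 102 is minikube's K8S_UNHEALTHY_CONTROL_PLANE error class seen in the stderr above: /healthz on https://192.168.76.2:8443 returned 200, but the control plane never reported the requested v1.20.0 within the 6m0s wait. A minimal sketch for reproducing and cleaning up locally, assuming a local minikube binary and the docker driver; the profile name and flags are taken from the failing invocation (KVM-only flags dropped), and the delete command is the one minikube itself suggests:

	# Re-run the failing second start with the test's flags (sketch; assumes
	# the docker driver is available and the stopped profile still exists).
	minikube start -p old-k8s-version-474598 --memory=2200 \
	  --alsologtostderr --wait=true --driver=docker \
	  --container-runtime=crio --kubernetes-version=v1.20.0
	# Cleanup suggested in the log when the control plane cannot update.
	minikube delete --all --purge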
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-474598
helpers_test.go:235: (dbg) docker inspect old-k8s-version-474598:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "488c1c329e41a6bb8e3ea135435a6e0d150e1847d966df238e8563a0593d59d7",
	        "Created": "2024-07-01T15:05:54.416833834Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3906423,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-01T15:09:02.20288182Z",
	            "FinishedAt": "2024-07-01T15:09:00.506059017Z"
	        },
	        "Image": "sha256:59cf53f54b1bed0b432ebf08c6ac817bec062867b90e25c5452b8e7c3276a7ff",
	        "ResolvConfPath": "/var/lib/docker/containers/488c1c329e41a6bb8e3ea135435a6e0d150e1847d966df238e8563a0593d59d7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/488c1c329e41a6bb8e3ea135435a6e0d150e1847d966df238e8563a0593d59d7/hostname",
	        "HostsPath": "/var/lib/docker/containers/488c1c329e41a6bb8e3ea135435a6e0d150e1847d966df238e8563a0593d59d7/hosts",
	        "LogPath": "/var/lib/docker/containers/488c1c329e41a6bb8e3ea135435a6e0d150e1847d966df238e8563a0593d59d7/488c1c329e41a6bb8e3ea135435a6e0d150e1847d966df238e8563a0593d59d7-json.log",
	        "Name": "/old-k8s-version-474598",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-474598:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-474598",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4932492edb194f292342ed064ba8025da93a6b16caa302563b74636432c145d3-init/diff:/var/lib/docker/overlay2/c3139abb5cf1c83f6f12f6a5f4a9c8df468321ed41d6e455d104ebf4c7d8657d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4932492edb194f292342ed064ba8025da93a6b16caa302563b74636432c145d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4932492edb194f292342ed064ba8025da93a6b16caa302563b74636432c145d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4932492edb194f292342ed064ba8025da93a6b16caa302563b74636432c145d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-474598",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-474598/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-474598",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-474598",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-474598",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "daaf3b39f7ca7c3c74b58e05510be09109fc2c1c045dd6b3d365a80d436931cc",
	            "SandboxKey": "/var/run/docker/netns/daaf3b39f7ca",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34190"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34191"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34192"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-474598": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "bdbc927c6f3536ac21b1dc3dc9118a3a9e931ac2a36d769b0f88b8fbb40dcc13",
	                    "EndpointID": "9075df8a8153d26f5fae246d1285a160d011a4abbf57562e467b74aee0a14869",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-474598",
	                        "488c1c329e41"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
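The inspect output confirms the node container itself is healthy: State.Running is true, the 8443/tcp API server port is published on 127.0.0.1:34193, and the Memory limit of 2306867200 bytes matches the test's --memory=2200 (2200 MiB). As a sketch for pulling the mapped API server port directly (the container name comes from the report; the --format expression is standard docker inspect Go templating):

	# Print the host port mapped to the API server's 8443/tcp.
	docker inspect old-k8s-version-474598 \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
	# => 34193 for the container state captured above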
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-474598 -n old-k8s-version-474598
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-474598 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-474598 logs -n 25: (2.328585869s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cilium-637965                                       | cilium-637965            | jenkins | v1.33.1 | 01 Jul 24 15:04 UTC | 01 Jul 24 15:04 UTC |
	| start   | -p cert-expiration-603938                              | cert-expiration-603938   | jenkins | v1.33.1 | 01 Jul 24 15:04 UTC | 01 Jul 24 15:05 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-737034                            | force-systemd-env-737034 | jenkins | v1.33.1 | 01 Jul 24 15:05 UTC | 01 Jul 24 15:05 UTC |
	| start   | -p cert-options-257243                                 | cert-options-257243      | jenkins | v1.33.1 | 01 Jul 24 15:05 UTC | 01 Jul 24 15:05 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	| ssh     | cert-options-257243 ssh                                | cert-options-257243      | jenkins | v1.33.1 | 01 Jul 24 15:05 UTC | 01 Jul 24 15:05 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-257243 -- sudo                         | cert-options-257243      | jenkins | v1.33.1 | 01 Jul 24 15:05 UTC | 01 Jul 24 15:05 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-257243                                 | cert-options-257243      | jenkins | v1.33.1 | 01 Jul 24 15:05 UTC | 01 Jul 24 15:05 UTC |
	| start   | -p old-k8s-version-474598                              | old-k8s-version-474598   | jenkins | v1.33.1 | 01 Jul 24 15:05 UTC | 01 Jul 24 15:08 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-603938                              | cert-expiration-603938   | jenkins | v1.33.1 | 01 Jul 24 15:08 UTC | 01 Jul 24 15:08 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-603938                              | cert-expiration-603938   | jenkins | v1.33.1 | 01 Jul 24 15:08 UTC | 01 Jul 24 15:08 UTC |
	| addons  | enable metrics-server -p old-k8s-version-474598        | old-k8s-version-474598   | jenkins | v1.33.1 | 01 Jul 24 15:08 UTC | 01 Jul 24 15:08 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-474598                              | old-k8s-version-474598   | jenkins | v1.33.1 | 01 Jul 24 15:08 UTC | 01 Jul 24 15:09 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| start   | -p no-preload-969646                                   | no-preload-969646        | jenkins | v1.33.1 | 01 Jul 24 15:08 UTC | 01 Jul 24 15:10 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-474598             | old-k8s-version-474598   | jenkins | v1.33.1 | 01 Jul 24 15:09 UTC | 01 Jul 24 15:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-474598                              | old-k8s-version-474598   | jenkins | v1.33.1 | 01 Jul 24 15:09 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-969646             | no-preload-969646        | jenkins | v1.33.1 | 01 Jul 24 15:10 UTC | 01 Jul 24 15:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-969646                                   | no-preload-969646        | jenkins | v1.33.1 | 01 Jul 24 15:10 UTC | 01 Jul 24 15:10 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-969646                  | no-preload-969646        | jenkins | v1.33.1 | 01 Jul 24 15:10 UTC | 01 Jul 24 15:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-969646                                   | no-preload-969646        | jenkins | v1.33.1 | 01 Jul 24 15:10 UTC | 01 Jul 24 15:14 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                          |         |         |                     |                     |
	| image   | no-preload-969646 image list                           | no-preload-969646        | jenkins | v1.33.1 | 01 Jul 24 15:15 UTC | 01 Jul 24 15:15 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-969646                                   | no-preload-969646        | jenkins | v1.33.1 | 01 Jul 24 15:15 UTC | 01 Jul 24 15:15 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-969646                                   | no-preload-969646        | jenkins | v1.33.1 | 01 Jul 24 15:15 UTC | 01 Jul 24 15:15 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-969646                                   | no-preload-969646        | jenkins | v1.33.1 | 01 Jul 24 15:15 UTC | 01 Jul 24 15:15 UTC |
	| delete  | -p no-preload-969646                                   | no-preload-969646        | jenkins | v1.33.1 | 01 Jul 24 15:15 UTC | 01 Jul 24 15:15 UTC |
	| start   | -p embed-certs-207952                                  | embed-certs-207952       | jenkins | v1.33.1 | 01 Jul 24 15:15 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
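	
	Note: the final row above is the start command whose log follows; reconstructed as a single invocation (binary path per MINIKUBE_BIN in the log below), it would be:
	
	  out/minikube-linux-arm64 start -p embed-certs-207952 --memory=2200 \
	    --alsologtostderr --wait=true --embed-certs --driver=docker \
	    --container-runtime=crio --kubernetes-version=v1.30.2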
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 15:15:11
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 15:15:11.829289 3915475 out.go:291] Setting OutFile to fd 1 ...
	I0701 15:15:11.829496 3915475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 15:15:11.829530 3915475 out.go:304] Setting ErrFile to fd 2...
	I0701 15:15:11.829550 3915475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 15:15:11.829843 3915475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 15:15:11.830349 3915475 out.go:298] Setting JSON to false
	I0701 15:15:11.831406 3915475 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":169063,"bootTime":1719677849,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0701 15:15:11.831518 3915475 start.go:139] virtualization:  
	I0701 15:15:11.836663 3915475 out.go:177] * [embed-certs-207952] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0701 15:15:11.839670 3915475 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 15:15:11.839742 3915475 notify.go:220] Checking for updates...
	I0701 15:15:11.843894 3915475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 15:15:11.847069 3915475 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 15:15:11.849194 3915475 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	I0701 15:15:11.851384 3915475 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0701 15:15:11.853602 3915475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 15:15:11.856624 3915475 config.go:182] Loaded profile config "old-k8s-version-474598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0701 15:15:11.856725 3915475 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 15:15:11.891992 3915475 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0701 15:15:11.892149 3915475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 15:15:11.969774 3915475 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-01 15:15:11.95868505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 15:15:11.969880 3915475 docker.go:295] overlay module found
	I0701 15:15:11.972239 3915475 out.go:177] * Using the docker driver based on user configuration
	I0701 15:15:11.973977 3915475 start.go:297] selected driver: docker
	I0701 15:15:11.973995 3915475 start.go:901] validating driver "docker" against <nil>
	I0701 15:15:11.974009 3915475 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 15:15:11.974728 3915475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 15:15:12.030139 3915475 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-01 15:15:12.019209244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 15:15:12.030341 3915475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 15:15:12.030588 3915475 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0701 15:15:12.032906 3915475 out.go:177] * Using Docker driver with root privileges
	I0701 15:15:12.035178 3915475 cni.go:84] Creating CNI manager for ""
	I0701 15:15:12.035210 3915475 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 15:15:12.035222 3915475 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 15:15:12.035325 3915475 start.go:340] cluster config:
	{Name:embed-certs-207952 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-207952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 15:15:12.037810 3915475 out.go:177] * Starting "embed-certs-207952" primary control-plane node in "embed-certs-207952" cluster
	I0701 15:15:12.039811 3915475 cache.go:121] Beginning downloading kic base image for docker with crio
	I0701 15:15:12.042117 3915475 out.go:177] * Pulling base image v0.0.44-1719413016-19142 ...
	I0701 15:15:12.044242 3915475 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 15:15:12.044303 3915475 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0701 15:15:12.044325 3915475 cache.go:56] Caching tarball of preloaded images
	I0701 15:15:12.044346 3915475 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon
	I0701 15:15:12.044427 3915475 preload.go:173] Found /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0701 15:15:12.044437 3915475 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0701 15:15:12.044614 3915475 profile.go:143] Saving config to /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/embed-certs-207952/config.json ...
	I0701 15:15:12.044654 3915475 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/embed-certs-207952/config.json: {Name:mka97fe2965e2135cf7b28239bc9a7c58ffd4b18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0701 15:15:12.062039 3915475 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon, skipping pull
	I0701 15:15:12.062070 3915475 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d exists in daemon, skipping load
	I0701 15:15:12.062090 3915475 cache.go:194] Successfully downloaded all kic artifacts
	I0701 15:15:12.062126 3915475 start.go:360] acquireMachinesLock for embed-certs-207952: {Name:mkd776ba3e1a32390baf4d44c3ff28f10b20be29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0701 15:15:12.062252 3915475 start.go:364] duration metric: took 101.827µs to acquireMachinesLock for "embed-certs-207952"
	I0701 15:15:12.062294 3915475 start.go:93] Provisioning new machine with config: &{Name:embed-certs-207952 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-207952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0701 15:15:12.062378 3915475 start.go:125] createHost starting for "" (driver="docker")
	I0701 15:15:12.066003 3915475 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0701 15:15:12.066327 3915475 start.go:159] libmachine.API.Create for "embed-certs-207952" (driver="docker")
	I0701 15:15:12.066366 3915475 client.go:168] LocalClient.Create starting
	I0701 15:15:12.066435 3915475 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/ca.pem
	I0701 15:15:12.066518 3915475 main.go:141] libmachine: Decoding PEM data...
	I0701 15:15:12.066540 3915475 main.go:141] libmachine: Parsing certificate...
	I0701 15:15:12.066602 3915475 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19166-3708336/.minikube/certs/cert.pem
	I0701 15:15:12.066635 3915475 main.go:141] libmachine: Decoding PEM data...
	I0701 15:15:12.066647 3915475 main.go:141] libmachine: Parsing certificate...
	I0701 15:15:12.067032 3915475 cli_runner.go:164] Run: docker network inspect embed-certs-207952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0701 15:15:12.087214 3915475 cli_runner.go:211] docker network inspect embed-certs-207952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0701 15:15:12.087340 3915475 network_create.go:284] running [docker network inspect embed-certs-207952] to gather additional debugging logs...
	I0701 15:15:12.087361 3915475 cli_runner.go:164] Run: docker network inspect embed-certs-207952
	W0701 15:15:12.112388 3915475 cli_runner.go:211] docker network inspect embed-certs-207952 returned with exit code 1
	I0701 15:15:12.112446 3915475 network_create.go:287] error running [docker network inspect embed-certs-207952]: docker network inspect embed-certs-207952: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-207952 not found
	I0701 15:15:12.112463 3915475 network_create.go:289] output of [docker network inspect embed-certs-207952]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-207952 not found
	
	** /stderr **
	I0701 15:15:12.112644 3915475 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0701 15:15:12.130547 3915475 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3cb95f1f57f2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:32:62:e0:6d} reservation:<nil>}
	I0701 15:15:12.131029 3915475 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-28e3ebdfe03a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:97:4d:77:de} reservation:<nil>}
	I0701 15:15:12.131599 3915475 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2eca4f7c13a7 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:3c:53:3b:8f} reservation:<nil>}
	I0701 15:15:12.132012 3915475 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-bdbc927c6f35 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:8d:4b:f6:f4} reservation:<nil>}
	I0701 15:15:12.132790 3915475 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191add0}
	I0701 15:15:12.132824 3915475 network_create.go:124] attempt to create docker network embed-certs-207952 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0701 15:15:12.132885 3915475 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-207952 embed-certs-207952
	I0701 15:15:12.212230 3915475 network_create.go:108] docker network embed-certs-207952 192.168.85.0/24 created
	I0701 15:15:12.212266 3915475 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-207952" container
	I0701 15:15:12.212357 3915475 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0701 15:15:12.234771 3915475 cli_runner.go:164] Run: docker volume create embed-certs-207952 --label name.minikube.sigs.k8s.io=embed-certs-207952 --label created_by.minikube.sigs.k8s.io=true
	I0701 15:15:12.255121 3915475 oci.go:103] Successfully created a docker volume embed-certs-207952
	I0701 15:15:12.255220 3915475 cli_runner.go:164] Run: docker run --rm --name embed-certs-207952-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-207952 --entrypoint /usr/bin/test -v embed-certs-207952:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d -d /var/lib
	I0701 15:15:12.935274 3915475 oci.go:107] Successfully prepared a docker volume embed-certs-207952
	I0701 15:15:12.935328 3915475 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 15:15:12.935349 3915475 kic.go:194] Starting extracting preloaded images to volume ...
	I0701 15:15:12.935432 3915475 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-207952:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d -I lz4 -xf /preloaded.tar -C /extractDir
	I0701 15:15:17.292251 3906202 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0701 15:15:17.303658 3906202 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0701 15:15:17.328014 3906202 out.go:177] 
	W0701 15:15:17.337150 3906202 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0701 15:15:17.337199 3906202 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0701 15:15:17.337218 3906202 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0701 15:15:17.337223 3906202 out.go:239] * 
	W0701 15:15:17.343698 3906202 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0701 15:15:17.365344 3906202 out.go:177] 
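	
	Note: the free-subnet scan above (192.168.49/58/67/76 taken, 192.168.85 free) can be re-checked by hand once the network exists; a minimal sketch using only the docker CLI, with the network name taken from this run:
	
	  # list the bridge networks minikube has labeled as its own
	  docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	  # confirm the subnet assigned to the new cluster (192.168.85.0/24 here)
	  docker network inspect embed-certs-207952 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'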
	
	
	==> CRI-O <==
	Jul 01 15:13:00 old-k8s-version-474598 crio[618]: time="2024-07-01 15:13:00.620952060Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=2023c74c-3920-467e-a8da-a319abbed76b name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:13:12 old-k8s-version-474598 crio[618]: time="2024-07-01 15:13:12.620928211Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f655d734-b729-424c-8d86-e5eb9bcd9df6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:13:12 old-k8s-version-474598 crio[618]: time="2024-07-01 15:13:12.621294130Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f655d734-b729-424c-8d86-e5eb9bcd9df6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:13:24 old-k8s-version-474598 crio[618]: time="2024-07-01 15:13:24.620698350Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=169a1a9d-77ea-4a5e-bd14-d0695d403013 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:13:24 old-k8s-version-474598 crio[618]: time="2024-07-01 15:13:24.620933052Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=169a1a9d-77ea-4a5e-bd14-d0695d403013 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:13:37 old-k8s-version-474598 crio[618]: time="2024-07-01 15:13:37.620766644Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=b6e1233f-5871-453e-90de-737bd4b84110 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:13:37 old-k8s-version-474598 crio[618]: time="2024-07-01 15:13:37.621000804Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=b6e1233f-5871-453e-90de-737bd4b84110 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:13:50 old-k8s-version-474598 crio[618]: time="2024-07-01 15:13:50.620835090Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=4156ea16-87f2-45cb-b51c-4301873c0889 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:13:50 old-k8s-version-474598 crio[618]: time="2024-07-01 15:13:50.621184501Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=4156ea16-87f2-45cb-b51c-4301873c0889 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:01 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:01.621079067Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f9a092b0-2408-48c8-a195-76bb8e01592e name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:01 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:01.621333199Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f9a092b0-2408-48c8-a195-76bb8e01592e name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:16 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:16.620912420Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c845a48b-0c31-47c8-8e13-41c8e3b25d8e name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:16 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:16.621247126Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c845a48b-0c31-47c8-8e13-41c8e3b25d8e name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:19 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:19.518149299Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=8764fb57-4f6e-425c-b269-9178b4d34c07 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:19 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:19.518390089Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8764fb57-4f6e-425c-b269-9178b4d34c07 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:31 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:31.620786592Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=9d02c36e-9b2d-4063-9026-0a13e0e30206 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:31 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:31.621049347Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=9d02c36e-9b2d-4063-9026-0a13e0e30206 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:44 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:44.620792607Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=1dd8e769-984d-40d8-8764-824e74e75dab name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:44 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:44.621150091Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=1dd8e769-984d-40d8-8764-824e74e75dab name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:58 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:58.620834647Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=e346ed8e-8338-4f0d-8b00-87bb7b43ecd0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:14:58 old-k8s-version-474598 crio[618]: time="2024-07-01 15:14:58.621188504Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=e346ed8e-8338-4f0d-8b00-87bb7b43ecd0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:15:09 old-k8s-version-474598 crio[618]: time="2024-07-01 15:15:09.621156632Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8944d523-d81a-4a47-aeb1-7fafd3ad4ba2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:15:09 old-k8s-version-474598 crio[618]: time="2024-07-01 15:15:09.621393664Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8944d523-d81a-4a47-aeb1-7fafd3ad4ba2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 01 15:15:09 old-k8s-version-474598 crio[618]: time="2024-07-01 15:15:09.622088357Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=6e7b9895-750c-49c7-bf6a-0fd0ef7930fb name=/runtime.v1alpha2.ImageService/PullImage
	Jul 01 15:15:09 old-k8s-version-474598 crio[618]: time="2024-07-01 15:15:09.627630177Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
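	
	Note: the repeating "Image ... not found" / "Trying to access" pairs above are expected for this test: metrics-server was re-registered to the unresolvable fake.domain (see the addons enable command in the table). A sketch of reproducing the pull failure by hand, assuming crictl is available on the node:
	
	  out/minikube-linux-arm64 -p old-k8s-version-474598 ssh
	  # inside the node: fails because fake.domain never resolves
	  sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4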
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c5883918438fc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 minutes ago       Exited              dashboard-metrics-scraper   5                   964f4d9677e1f       dashboard-metrics-scraper-8d5bb5db8-ppb9p
	870578f023cca       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago       Running             kubernetes-dashboard        0                   949bead633303       kubernetes-dashboard-cd95d586-wrff2
	25c8776f1771d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Running             storage-provisioner         0                   4c23d0c907a2a       storage-provisioner
	ec86544b4349a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           5 minutes ago       Running             busybox                     0                   152ab9f058bf9       busybox
	d6c47f5e5c008       89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40                                           5 minutes ago       Running             kindnet-cni                 0                   93a6791743c47       kindnet-4k4lt
	585eb048d28ee       db91994f4ee8f894a1e8a6c1a76f615da8fc3c019300a3686291ce6fcbc57895                                           5 minutes ago       Running             coredns                     0                   7c0f56b14f84d       coredns-74ff55c5b-6nqwr
	4f612ce98e504       25a5233254979d0678a2db1d15b76b73dc380d81bc5eed93916ba5638b3cd894                                           5 minutes ago       Running             kube-proxy                  0                   2767ab30c048a       kube-proxy-prspm
	6f249a20156ff       1df8a2b116bd16f7070fd383a6769c8d644b365575e8ffa3e492b84e4f05fc74                                           5 minutes ago       Running             kube-controller-manager     0                   5b1dd556f22be       kube-controller-manager-old-k8s-version-474598
	8937951752f8c       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28                                           5 minutes ago       Running             etcd                        0                   768c29981e415       etcd-old-k8s-version-474598
	99b47a1789a53       e7605f88f17d6a4c3f083ef9c6f5f19b39f87e4d4406a05a8612b54a6ea57051                                           5 minutes ago       Running             kube-scheduler              0                   cf77d737fcaa7       kube-scheduler-old-k8s-version-474598
	29ff1e584547a       2c08bbbc02d3aa5dfbf4e79f15c0a61424049288917aa10364464ca1f7de7157                                           5 minutes ago       Running             kube-apiserver              0                   b7a013cb4d874       kube-apiserver-old-k8s-version-474598
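	
	Note: the single Exited row is the dashboard-metrics-scraper crash loop (attempt 5). Its last output can be read back by container ID from the same runtime; a sketch, to be run inside the node:
	
	  sudo crictl ps -a --name dashboard-metrics-scraper   # locate the exited container
	  sudo crictl logs c5883918438fc                       # dump its final run's output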
	
	
	==> coredns [585eb048d28eef3f91142d493ebacd44932dc6beaeb62efc44eef6e21a027d29] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:51952 - 42951 "HINFO IN 9146435228650152897.2704546719472669548. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011730957s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:44443 - 28859 "HINFO IN 2863620352919622520.3109769025305134213. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033939018s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0701 15:10:06.387349       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-01 15:09:36.386675205 +0000 UTC m=+0.121971736) (total time: 30.00056638s):
	Trace[2019727887]: [30.00056638s] [30.00056638s] END
	E0701 15:10:06.387481       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0701 15:10:06.387526       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-01 15:09:36.38725098 +0000 UTC m=+0.122547511) (total time: 30.000256511s):
	Trace[1427131847]: [30.000256511s] [30.000256511s] END
	E0701 15:10:06.387542       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0701 15:10:06.387823       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-01 15:09:36.387588116 +0000 UTC m=+0.122884663) (total time: 30.000220761s):
	Trace[911902081]: [30.000220761s] [30.000220761s] END
	E0701 15:10:06.387831       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
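	
	Note: 10.96.0.1:443 in the errors above is the in-cluster kubernetes Service VIP (the first address of the 10.96.0.0/12 ServiceCIDR); the i/o timeouts fall in the restart window before kube-proxy reprogrammed the node. A quick cross-check of the address:
	
	  kubectl --context old-k8s-version-474598 get svc kubernetes -o jsonpath='{.spec.clusterIP}'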
	
	
	==> describe nodes <==
	Name:               old-k8s-version-474598
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-474598
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f87dc4b1adfa3faf30393d14b8f7fb9acc5e991c
	                    minikube.k8s.io/name=old-k8s-version-474598
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_01T15_06_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Jul 2024 15:06:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-474598
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Jul 2024 15:15:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Jul 2024 15:10:25 +0000   Mon, 01 Jul 2024 15:06:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Jul 2024 15:10:25 +0000   Mon, 01 Jul 2024 15:06:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Jul 2024 15:10:25 +0000   Mon, 01 Jul 2024 15:06:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Jul 2024 15:10:25 +0000   Mon, 01 Jul 2024 15:07:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-474598
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 b0337712096d4a84b29e48b38b4b0a40
	  System UUID:                0c4fd3b9-24c5-4785-ab57-e69a0a8b8da1
	  Boot ID:                    030faa4f-44aa-434e-978f-182f6d212f48
	  Kernel Version:             5.15.0-1063-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 coredns-74ff55c5b-6nqwr                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m22s
	  kube-system                 etcd-old-k8s-version-474598                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m30s
	  kube-system                 kindnet-4k4lt                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m22s
	  kube-system                 kube-apiserver-old-k8s-version-474598             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-controller-manager-old-k8s-version-474598    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-prspm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-old-k8s-version-474598             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 metrics-server-9975d5f86-99tkb                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m31s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-ppb9p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-wrff2               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m49s (x5 over 8m49s)  kubelet     Node old-k8s-version-474598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m49s (x5 over 8m49s)  kubelet     Node old-k8s-version-474598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m49s (x4 over 8m49s)  kubelet     Node old-k8s-version-474598 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m30s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m30s                  kubelet     Node old-k8s-version-474598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s                  kubelet     Node old-k8s-version-474598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s                  kubelet     Node old-k8s-version-474598 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m20s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                8m10s                  kubelet     Node old-k8s-version-474598 status is now: NodeReady
	  Normal  Starting                 6m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m)        kubelet     Node old-k8s-version-474598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)        kubelet     Node old-k8s-version-474598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x8 over 6m)        kubelet     Node old-k8s-version-474598 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m43s                  kube-proxy  Starting kube-proxy.
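	
	Note: with metrics-server-9975d5f86-99tkb stuck on an unpullable image, the Metrics API never comes up on this node. A quick probe (the k8s-app label is assumed from the stock addon manifest; top is expected to fail here):
	
	  kubectl --context old-k8s-version-474598 top node
	  kubectl --context old-k8s-version-474598 -n kube-system get pods -l k8s-app=metrics-server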
	
	
	==> dmesg <==
	[  +0.000739] FS-Cache: N-cookie c=000001f2 [p=000001e9 fl=2 nc=0 na=1]
	[  +0.000919] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=000000002bcd9820
	[  +0.001021] FS-Cache: N-key=[8] '7f903b0000000000'
	[  +0.003039] FS-Cache: Duplicate cookie detected
	[  +0.000679] FS-Cache: O-cookie c=000001ec [p=000001e9 fl=226 nc=0 na=1]
	[  +0.000953] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=000000004cf8c411
	[  +0.001075] FS-Cache: O-key=[8] '7f903b0000000000'
	[  +0.000699] FS-Cache: N-cookie c=000001f3 [p=000001e9 fl=2 nc=0 na=1]
	[  +0.000920] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=000000007108af87
	[  +0.001036] FS-Cache: N-key=[8] '7f903b0000000000'
	[  +2.349943] FS-Cache: Duplicate cookie detected
	[  +0.000692] FS-Cache: O-cookie c=000001ea [p=000001e9 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=000000003e41755a
	[  +0.001031] FS-Cache: O-key=[8] '7e903b0000000000'
	[  +0.000727] FS-Cache: N-cookie c=000001f5 [p=000001e9 fl=2 nc=0 na=1]
	[  +0.000922] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=0000000049bce9d6
	[  +0.001027] FS-Cache: N-key=[8] '7e903b0000000000'
	[  +0.286123] FS-Cache: Duplicate cookie detected
	[  +0.000698] FS-Cache: O-cookie c=000001ef [p=000001e9 fl=226 nc=0 na=1]
	[  +0.000952] FS-Cache: O-cookie d=00000000ac0c5ba0{9p.inode} n=000000001ef70645
	[  +0.001038] FS-Cache: O-key=[8] '84903b0000000000'
	[  +0.000692] FS-Cache: N-cookie c=000001f6 [p=000001e9 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000ac0c5ba0{9p.inode} n=000000002bcd9820
	[  +0.001037] FS-Cache: N-key=[8] '84903b0000000000'
	[Jul 1 15:03] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [8937951752f8cf91f00237b6ccb23193fd6ae6e0c75a210a7eb01e45df33434f] <==
	2024-07-01 15:11:18.305846 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:11:28.306002 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:11:38.305796 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:11:48.307040 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:11:58.305897 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:12:08.305803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:12:18.305814 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:12:28.305973 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:12:38.305865 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:12:48.305764 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:12:58.305771 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:13:08.305874 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:13:18.305769 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:13:28.305893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:13:38.305706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:13:48.305859 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:13:58.305781 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:14:08.305930 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:14:18.305925 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:14:28.305881 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:14:38.305936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:14:48.305889 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:14:58.305763 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:15:08.305926 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-01 15:15:18.307112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
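	
	Note: the steady 10-second /health OK cadence above matches the kubelet liveness probe against etcd. It can be hit directly, assuming the kubeadm default plaintext metrics listener on 127.0.0.1:2381 inside the node:
	
	  out/minikube-linux-arm64 -p old-k8s-version-474598 ssh -- curl -s http://127.0.0.1:2381/health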
	
	
	==> kernel <==
	 15:15:19 up 1 day, 22:57,  0 users,  load average: 1.58, 1.87, 2.31
	Linux old-k8s-version-474598 5.15.0-1063-aws #69~20.04.1-Ubuntu SMP Fri May 10 19:21:30 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [d6c47f5e5c008f8c4904f4fab278a0c43bd06a808a8fc9f67cc24c2e47316d28] <==
	I0701 15:13:18.576737       1 main.go:227] handling current node
	I0701 15:13:28.584299       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:13:28.584328       1 main.go:227] handling current node
	I0701 15:13:38.596440       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:13:38.596466       1 main.go:227] handling current node
	I0701 15:13:48.612265       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:13:48.612292       1 main.go:227] handling current node
	I0701 15:13:58.627310       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:13:58.627340       1 main.go:227] handling current node
	I0701 15:14:08.643673       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:14:08.643699       1 main.go:227] handling current node
	I0701 15:14:18.667857       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:14:18.667882       1 main.go:227] handling current node
	I0701 15:14:28.673186       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:14:28.673218       1 main.go:227] handling current node
	I0701 15:14:38.701187       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:14:38.701294       1 main.go:227] handling current node
	I0701 15:14:48.715497       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:14:48.715526       1 main.go:227] handling current node
	I0701 15:14:58.727454       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:14:58.727482       1 main.go:227] handling current node
	I0701 15:15:08.744501       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:15:08.744530       1 main.go:227] handling current node
	I0701 15:15:18.781097       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0701 15:15:18.781129       1 main.go:227] handling current node
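	
	Note: kindnet re-lists node IPs roughly every 10s to keep pod routes in sync; on this single-node cluster it only ever handles the current node, as above. The daemonset itself can be checked with (resource name assumed from the stock kindnet manifest):
	
	  kubectl --context old-k8s-version-474598 -n kube-system get ds kindnet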
	
	
	==> kube-apiserver [29ff1e584547a3f1954c3b3bc8d86133f9f8821165607c401129fbb1ad25343b] <==
	I0701 15:11:51.827295       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0701 15:11:51.827304       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0701 15:12:36.351119       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 15:12:36.351195       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 15:12:36.351202       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0701 15:12:36.710335       1 client.go:360] parsed scheme: "passthrough"
	I0701 15:12:36.710391       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0701 15:12:36.710401       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0701 15:13:16.035790       1 client.go:360] parsed scheme: "passthrough"
	I0701 15:13:16.035835       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0701 15:13:16.035844       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0701 15:13:54.951035       1 client.go:360] parsed scheme: "passthrough"
	I0701 15:13:54.951077       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0701 15:13:54.951086       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0701 15:14:27.676333       1 client.go:360] parsed scheme: "passthrough"
	I0701 15:14:27.676380       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0701 15:14:27.676389       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0701 15:14:33.180051       1 handler_proxy.go:102] no RequestInfo found in the context
	E0701 15:14:33.180144       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0701 15:14:33.180152       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0701 15:15:02.926054       1 client.go:360] parsed scheme: "passthrough"
	I0701 15:15:02.926101       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0701 15:15:02.926110       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
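
The "parsed scheme"/"pick_first" lines are routine etcd client reconnects; the entries that matter are the repeated 503s for v1beta1.metrics.k8s.io, which show the aggregation layer retrying an aggregated API whose backing metrics-server pod never became ready (see the kubelet section below). A hypothetical way to see the same condition from the APIService object:

  kubectl --context old-k8s-version-474598 get apiservice v1beta1.metrics.k8s.io
  (AVAILABLE would be expected to read False while metrics-server is down)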
	
	
	==> kube-controller-manager [6f249a20156ffcc8d1b05a5a0133a0476123eab1338f65400a301afe0851c461] <==
	W0701 15:10:57.492079       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 15:11:25.184324       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0701 15:11:29.142605       1 request.go:655] Throttling request took 1.036824677s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0701 15:11:29.994051       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 15:11:55.686077       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0701 15:12:01.644523       1 request.go:655] Throttling request took 1.048502614s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0701 15:12:02.497484       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 15:12:26.188186       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0701 15:12:34.147823       1 request.go:655] Throttling request took 1.048341901s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W0701 15:12:34.999272       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 15:12:56.690076       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0701 15:13:06.649658       1 request.go:655] Throttling request took 1.048413098s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0701 15:13:07.502379       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 15:13:27.192561       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0701 15:13:39.153528       1 request.go:655] Throttling request took 1.04835187s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0701 15:13:40.013288       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 15:13:57.694451       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0701 15:14:11.664705       1 request.go:655] Throttling request took 1.048493432s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0701 15:14:12.516177       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 15:14:28.196216       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0701 15:14:44.166690       1 request.go:655] Throttling request took 1.048299094s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W0701 15:14:45.019046       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0701 15:14:58.698101       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0701 15:15:16.669549       1 request.go:655] Throttling request took 1.047585735s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W0701 15:15:17.521162       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
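
The controller-manager errors are downstream of the same unavailable metrics API: the garbage collector and resource-quota controllers cannot complete group discovery while metrics.k8s.io/v1beta1 returns 503, and client-side throttling adds roughly one second to each discovery GET. The discovery failure can be reproduced directly with a hypothetical command (not from the recorded run):

  kubectl --context old-k8s-version-474598 get --raw /apis/metrics.k8s.io/v1beta1
  (expected to fail with "the server is currently unable to handle the request")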
	
	
	==> kube-proxy [4f612ce98e504e45b0b7d45ab196646d112950fff7af0818ed6b6ae20f451730] <==
	I0701 15:06:59.908164       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0701 15:06:59.908439       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0701 15:06:59.919951       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0701 15:06:59.920037       1 server_others.go:185] Using iptables Proxier.
	I0701 15:06:59.920289       1 server.go:650] Version: v1.20.0
	I0701 15:06:59.921588       1 config.go:224] Starting endpoint slice config controller
	I0701 15:06:59.921702       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0701 15:06:59.922482       1 config.go:315] Starting service config controller
	I0701 15:06:59.922558       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0701 15:07:00.034958       1 shared_informer.go:247] Caches are synced for service config 
	I0701 15:07:00.035018       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0701 15:09:36.629035       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0701 15:09:36.629119       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0701 15:09:36.641586       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0701 15:09:36.641762       1 server_others.go:185] Using iptables Proxier.
	I0701 15:09:36.643101       1 server.go:650] Version: v1.20.0
	I0701 15:09:36.645886       1 config.go:315] Starting service config controller
	I0701 15:09:36.658875       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0701 15:09:36.658962       1 shared_informer.go:247] Caches are synced for service config 
	I0701 15:09:36.652513       1 config.go:224] Starting endpoint slice config controller
	I0701 15:09:36.659095       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0701 15:09:36.659122       1 shared_informer.go:247] Caches are synced for endpoint slice config 
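
Both kube-proxy start sequences (one per container start, before and after the node restart) log the same benign warning: the proxy-mode field was left empty, so kube-proxy falls back to iptables. Since this cluster is provisioned by kubeadm, the configured mode lives in the kube-proxy ConfigMap; a hypothetical check:

  kubectl --context old-k8s-version-474598 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'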
	
	
	==> kube-scheduler [99b47a1789a53fcc22fad9c608f7e9a89470909c3bed1f74b857b5da84b94f8c] <==
	E0701 15:06:38.093200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0701 15:06:38.099427       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0701 15:06:38.099687       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0701 15:06:38.099782       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 15:06:38.099860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0701 15:06:38.099933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0701 15:06:38.100004       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0701 15:06:38.100071       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0701 15:06:38.100150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0701 15:06:38.100274       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0701 15:06:38.100352       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0701 15:06:38.100417       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0701 15:06:39.024361       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0701 15:06:39.097518       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0701 15:06:39.779607       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0701 15:09:26.569738       1 serving.go:331] Generated self-signed cert in-memory
	W0701 15:09:32.101571       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0701 15:09:32.101717       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0701 15:09:32.101732       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0701 15:09:32.101738       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0701 15:09:32.243752       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0701 15:09:32.249148       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0701 15:09:32.253158       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0701 15:09:32.253343       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0701 15:09:32.390803       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
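
The burst of Forbidden errors at 15:06:38-39 is the usual scheduler startup race: its informers begin listing before the apiserver has finished bootstrapping the system:kube-scheduler RBAC bindings, and the errors stop once caches sync. A hypothetical spot check of the permissions after startup:

  kubectl --context old-k8s-version-474598 auth can-i list pods --as=system:kube-scheduler
  (expected output: yes)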
	
	
	==> kubelet <==
	Jul 01 15:13:50 old-k8s-version-474598 kubelet[731]: E0701 15:13:50.621437     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 01 15:13:54 old-k8s-version-474598 kubelet[731]: I0701 15:13:54.620418     731 scope.go:95] [topologymanager] RemoveContainer - Container ID: c5883918438fc46716a63a35a6278fc60f6b19c1de9f1d272d946ab4b8c49aca
	Jul 01 15:13:54 old-k8s-version-474598 kubelet[731]: E0701 15:13:54.620755     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	Jul 01 15:14:01 old-k8s-version-474598 kubelet[731]: E0701 15:14:01.621745     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 01 15:14:07 old-k8s-version-474598 kubelet[731]: I0701 15:14:07.620415     731 scope.go:95] [topologymanager] RemoveContainer - Container ID: c5883918438fc46716a63a35a6278fc60f6b19c1de9f1d272d946ab4b8c49aca
	Jul 01 15:14:07 old-k8s-version-474598 kubelet[731]: E0701 15:14:07.620754     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	Jul 01 15:14:16 old-k8s-version-474598 kubelet[731]: E0701 15:14:16.622062     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 01 15:14:19 old-k8s-version-474598 kubelet[731]: E0701 15:14:19.598995     731 container_manager_linux.go:533] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/488c1c329e41a6bb8e3ea135435a6e0d150e1847d966df238e8563a0593d59d7, memory: /docker/488c1c329e41a6bb8e3ea135435a6e0d150e1847d966df238e8563a0593d59d7/system.slice/kubelet.service
	Jul 01 15:14:22 old-k8s-version-474598 kubelet[731]: I0701 15:14:22.620459     731 scope.go:95] [topologymanager] RemoveContainer - Container ID: c5883918438fc46716a63a35a6278fc60f6b19c1de9f1d272d946ab4b8c49aca
	Jul 01 15:14:22 old-k8s-version-474598 kubelet[731]: E0701 15:14:22.620813     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	Jul 01 15:14:31 old-k8s-version-474598 kubelet[731]: E0701 15:14:31.621391     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 01 15:14:33 old-k8s-version-474598 kubelet[731]: I0701 15:14:33.620477     731 scope.go:95] [topologymanager] RemoveContainer - Container ID: c5883918438fc46716a63a35a6278fc60f6b19c1de9f1d272d946ab4b8c49aca
	Jul 01 15:14:33 old-k8s-version-474598 kubelet[731]: E0701 15:14:33.620802     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	Jul 01 15:14:44 old-k8s-version-474598 kubelet[731]: E0701 15:14:44.621361     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 01 15:14:48 old-k8s-version-474598 kubelet[731]: I0701 15:14:48.620387     731 scope.go:95] [topologymanager] RemoveContainer - Container ID: c5883918438fc46716a63a35a6278fc60f6b19c1de9f1d272d946ab4b8c49aca
	Jul 01 15:14:48 old-k8s-version-474598 kubelet[731]: E0701 15:14:48.620740     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	Jul 01 15:14:58 old-k8s-version-474598 kubelet[731]: E0701 15:14:58.621427     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 01 15:15:01 old-k8s-version-474598 kubelet[731]: I0701 15:15:01.620520     731 scope.go:95] [topologymanager] RemoveContainer - Container ID: c5883918438fc46716a63a35a6278fc60f6b19c1de9f1d272d946ab4b8c49aca
	Jul 01 15:15:01 old-k8s-version-474598 kubelet[731]: E0701 15:15:01.620856     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
	Jul 01 15:15:09 old-k8s-version-474598 kubelet[731]: E0701 15:15:09.632533     731 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 01 15:15:09 old-k8s-version-474598 kubelet[731]: E0701 15:15:09.632589     731 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 01 15:15:09 old-k8s-version-474598 kubelet[731]: E0701 15:15:09.632785     731 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-tnwnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 01 15:15:09 old-k8s-version-474598 kubelet[731]: E0701 15:15:09.632816     731 pod_workers.go:191] Error syncing pod 7e298411-ec18-491b-b64f-48a78925d9cf ("metrics-server-9975d5f86-99tkb_kube-system(7e298411-ec18-491b-b64f-48a78925d9cf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jul 01 15:15:12 old-k8s-version-474598 kubelet[731]: I0701 15:15:12.621005     731 scope.go:95] [topologymanager] RemoveContainer - Container ID: c5883918438fc46716a63a35a6278fc60f6b19c1de9f1d272d946ab4b8c49aca
	Jul 01 15:15:12 old-k8s-version-474598 kubelet[731]: E0701 15:15:12.621391     731 pod_workers.go:191] Error syncing pod c0d271ca-8841-4811-a61c-55f3d52669c4 ("dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppb9p_kubernetes-dashboard(c0d271ca-8841-4811-a61c-55f3d52669c4)"
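
Two failure loops dominate the kubelet log: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 because the registry host does not resolve (the test suite deliberately points the image at a fake domain), and dashboard-metrics-scraper sits in a 2m40s CrashLoopBackOff. The pull errors also surface as pod events; a hypothetical check, assuming the addon's usual k8s-app=metrics-server label:

  kubectl --context old-k8s-version-474598 -n kube-system describe pod -l k8s-app=metrics-server | grep -A5 Events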
	
	
	==> kubernetes-dashboard [870578f023cca426ce7d3f51bb2af8cb79612ae25002a652dcbcb30bc1690ed1] <==
	2024/07/01 15:10:04 Starting overwatch
	2024/07/01 15:10:04 Using namespace: kubernetes-dashboard
	2024/07/01 15:10:04 Using in-cluster config to connect to apiserver
	2024/07/01 15:10:04 Using secret token for csrf signing
	2024/07/01 15:10:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/01 15:10:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/01 15:10:04 Successful initial request to the apiserver, version: v1.20.0
	2024/07/01 15:10:04 Generating JWE encryption key
	2024/07/01 15:10:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/01 15:10:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/01 15:10:04 Initializing JWE encryption key from synchronized object
	2024/07/01 15:10:04 Creating in-cluster Sidecar client
	2024/07/01 15:10:04 Serving insecurely on HTTP port: 9090
	2024/07/01 15:10:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/01 15:10:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/01 15:11:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/01 15:11:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/01 15:12:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/01 15:12:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/01 15:13:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/01 15:13:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/01 15:14:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/01 15:14:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/01 15:15:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
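
The dashboard itself comes up cleanly and serves on HTTP port 9090; only its metric-client health check fails every 30 seconds, because the dashboard-metrics-scraper it depends on is unavailable (its pod is crash-looping, per the kubelet log above). A hypothetical way to confirm the dashboard components are otherwise present:

  kubectl --context old-k8s-version-474598 -n kubernetes-dashboard get pods,svc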
	
	
	==> storage-provisioner [25c8776f1771df2532ca9cb51b3c40a3778154fec4f38ce0727c6e4b29adc787] <==
	I0701 15:07:14.755208       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0701 15:07:14.776230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0701 15:07:14.778421       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0701 15:07:14.821724       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0701 15:07:14.821979       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-474598_d23fcb95-97a6-47e8-a648-21f64f216e8e!
	I0701 15:07:14.822676       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1eb87cdf-19d1-44a8-b26a-bf206c9de2f7", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-474598_d23fcb95-97a6-47e8-a648-21f64f216e8e became leader
	I0701 15:07:14.922942       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-474598_d23fcb95-97a6-47e8-a648-21f64f216e8e!
	I0701 15:09:49.795184       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0701 15:09:49.816447       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0701 15:09:49.816645       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0701 15:10:07.321364       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0701 15:10:07.321525       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-474598_5559e759-e0c4-46f4-a073-f252aad476fe!
	I0701 15:10:07.322535       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1eb87cdf-19d1-44a8-b26a-bf206c9de2f7", APIVersion:"v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-474598_5559e759-e0c4-46f4-a073-f252aad476fe became leader
	I0701 15:10:07.422638       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-474598_5559e759-e0c4-46f4-a073-f252aad476fe!
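
The storage-provisioner log shows two clean starts (before and after the node restart); on the second start it waits about 17 seconds (15:09:49 to 15:10:07), presumably for the previous k8s.io-minikube-hostpath lease to expire, before re-acquiring leadership. With this client-go vintage the lease is recorded as an annotation on an Endpoints object, so a hypothetical inspection looks like:

  kubectl --context old-k8s-version-474598 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml | grep leader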
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-474598 -n old-k8s-version-474598
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-474598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-99tkb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-474598 describe pod metrics-server-9975d5f86-99tkb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-474598 describe pod metrics-server-9975d5f86-99tkb: exit status 1 (97.034752ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-99tkb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-474598 describe pod metrics-server-9975d5f86-99tkb: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (379.23s)

                                                
                                    

Test pass (294/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.81
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.30.2/json-events 5.76
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.07
18 TestDownloadOnly/v1.30.2/DeleteAll 0.2
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.54
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 214.72
29 TestAddons/parallel/Registry 16.12
31 TestAddons/parallel/InspektorGadget 11.77
35 TestAddons/parallel/CSI 51.46
36 TestAddons/parallel/Headlamp 11.98
37 TestAddons/parallel/CloudSpanner 5.57
38 TestAddons/parallel/LocalPath 51.38
39 TestAddons/parallel/NvidiaDevicePlugin 6.55
40 TestAddons/parallel/Yakd 5.01
44 TestAddons/serial/GCPAuth/Namespaces 0.18
45 TestAddons/StoppedEnableDisable 12.22
46 TestCertOptions 36.85
47 TestCertExpiration 249.04
49 TestForceSystemdFlag 44.36
50 TestForceSystemdEnv 42.12
56 TestErrorSpam/setup 31.09
57 TestErrorSpam/start 0.68
58 TestErrorSpam/status 0.98
59 TestErrorSpam/pause 1.63
60 TestErrorSpam/unpause 1.73
61 TestErrorSpam/stop 1.44
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.65
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 28.47
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.87
73 TestFunctional/serial/CacheCmd/cache/add_local 1.01
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 40
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.64
84 TestFunctional/serial/LogsFileCmd 1.77
85 TestFunctional/serial/InvalidService 4.68
87 TestFunctional/parallel/ConfigCmd 0.44
88 TestFunctional/parallel/DashboardCmd 12.4
89 TestFunctional/parallel/DryRun 0.41
90 TestFunctional/parallel/InternationalLanguage 0.18
91 TestFunctional/parallel/StatusCmd 1.16
95 TestFunctional/parallel/ServiceCmdConnect 10.57
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 25.19
99 TestFunctional/parallel/SSHCmd 0.72
100 TestFunctional/parallel/CpCmd 1.96
102 TestFunctional/parallel/FileSync 0.36
103 TestFunctional/parallel/CertSync 2.14
107 TestFunctional/parallel/NodeLabels 0.09
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
111 TestFunctional/parallel/License 0.25
112 TestFunctional/parallel/Version/short 0.08
113 TestFunctional/parallel/Version/components 1.48
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.43
119 TestFunctional/parallel/ImageCommands/Setup 1.67
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.04
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
125 TestFunctional/parallel/ProfileCmd/profile_list 0.48
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.84
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.39
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.35
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.5
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.87
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.28
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.92
144 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
145 TestFunctional/parallel/ServiceCmd/List 0.56
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
148 TestFunctional/parallel/ServiceCmd/Format 0.37
149 TestFunctional/parallel/ServiceCmd/URL 0.38
150 TestFunctional/parallel/MountCmd/any-port 7.38
151 TestFunctional/parallel/MountCmd/specific-port 2.44
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.93
153 TestFunctional/delete_addon-resizer_images 0.08
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 165.13
160 TestMultiControlPlane/serial/DeployApp 7.34
161 TestMultiControlPlane/serial/PingHostFromPods 1.61
162 TestMultiControlPlane/serial/AddWorkerNode 56.24
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.73
165 TestMultiControlPlane/serial/CopyFile 18.95
166 TestMultiControlPlane/serial/StopSecondaryNode 12.75
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.55
168 TestMultiControlPlane/serial/RestartSecondaryNode 34.23
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.49
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 208.83
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.24
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.52
173 TestMultiControlPlane/serial/StopCluster 35.73
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.57
176 TestMultiControlPlane/serial/AddSecondaryNode 64.1
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.81
181 TestJSONOutput/start/Command 76.25
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.73
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.64
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.92
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 43.64
207 TestKicCustomNetwork/use_default_bridge_network 38.72
208 TestKicExistingNetwork 33.69
209 TestKicCustomSubnet 34.83
210 TestKicStaticIP 36.11
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 63.4
215 TestMountStart/serial/StartWithMountFirst 6.91
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 6.49
218 TestMountStart/serial/VerifyMountSecond 0.27
219 TestMountStart/serial/DeleteFirst 1.63
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 8.57
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 125.53
227 TestMultiNode/serial/DeployApp2Nodes 5.99
228 TestMultiNode/serial/PingHostFrom2Pods 0.95
229 TestMultiNode/serial/AddNode 46.67
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.33
232 TestMultiNode/serial/CopyFile 9.96
233 TestMultiNode/serial/StopNode 2.23
234 TestMultiNode/serial/StartAfterStop 9.84
235 TestMultiNode/serial/RestartKeepsNodes 81.92
236 TestMultiNode/serial/DeleteNode 5.3
237 TestMultiNode/serial/StopMultiNode 23.82
238 TestMultiNode/serial/RestartMultiNode 55.44
239 TestMultiNode/serial/ValidateNameConflict 35.28
244 TestPreload 113.81
246 TestScheduledStopUnix 103.05
249 TestInsufficientStorage 10.48
250 TestRunningBinaryUpgrade 81.31
252 TestKubernetesUpgrade 137.41
253 TestMissingContainerUpgrade 151.84
255 TestPause/serial/Start 89.88
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 43.6
259 TestNoKubernetes/serial/StartWithStopK8s 6.88
260 TestNoKubernetes/serial/Start 7.66
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
262 TestNoKubernetes/serial/ProfileList 0.99
263 TestNoKubernetes/serial/Stop 1.21
264 TestNoKubernetes/serial/StartNoArgs 7.82
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
266 TestPause/serial/SecondStartNoReconfiguration 29.84
267 TestPause/serial/Pause 1.2
268 TestPause/serial/VerifyStatus 0.38
269 TestPause/serial/Unpause 0.94
270 TestPause/serial/PauseAgain 1.28
271 TestPause/serial/DeletePaused 2.97
272 TestPause/serial/VerifyDeletedResources 0.5
273 TestStoppedBinaryUpgrade/Setup 1.13
274 TestStoppedBinaryUpgrade/Upgrade 77.56
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.43
290 TestNetworkPlugins/group/false 5.16
295 TestStartStop/group/old-k8s-version/serial/FirstStart 169.21
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.91
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.55
298 TestStartStop/group/old-k8s-version/serial/Stop 13.07
300 TestStartStop/group/no-preload/serial/FirstStart 71.69
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
303 TestStartStop/group/no-preload/serial/DeployApp 8.44
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
305 TestStartStop/group/no-preload/serial/Stop 12.03
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 271.68
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
311 TestStartStop/group/no-preload/serial/Pause 3.65
313 TestStartStop/group/embed-certs/serial/FirstStart 85.28
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
317 TestStartStop/group/old-k8s-version/serial/Pause 3.59
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.28
320 TestStartStop/group/embed-certs/serial/DeployApp 9.31
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
322 TestStartStop/group/embed-certs/serial/Stop 12.02
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
324 TestStartStop/group/embed-certs/serial/SecondStart 267.8
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.51
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.7
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.26
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 303.97
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
333 TestStartStop/group/embed-certs/serial/Pause 3.2
335 TestStartStop/group/newest-cni/serial/FirstStart 45.53
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
339 TestStartStop/group/newest-cni/serial/Stop 1.24
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
341 TestStartStop/group/newest-cni/serial/SecondStart 24.29
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.87
345 TestNetworkPlugins/group/auto/Start 88.7
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
349 TestStartStop/group/newest-cni/serial/Pause 3.01
350 TestNetworkPlugins/group/kindnet/Start 82.5
351 TestNetworkPlugins/group/auto/KubeletFlags 0.3
352 TestNetworkPlugins/group/auto/NetCatPod 11.28
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/auto/DNS 0.2
355 TestNetworkPlugins/group/auto/Localhost 0.15
356 TestNetworkPlugins/group/auto/HairPin 0.15
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.23
359 TestNetworkPlugins/group/kindnet/DNS 0.23
360 TestNetworkPlugins/group/kindnet/Localhost 0.19
361 TestNetworkPlugins/group/kindnet/HairPin 0.21
362 TestNetworkPlugins/group/calico/Start 75.41
363 TestNetworkPlugins/group/custom-flannel/Start 71.67
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.3
366 TestNetworkPlugins/group/calico/NetCatPod 11.28
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.32
369 TestNetworkPlugins/group/calico/DNS 0.3
370 TestNetworkPlugins/group/calico/Localhost 0.21
371 TestNetworkPlugins/group/calico/HairPin 0.27
372 TestNetworkPlugins/group/custom-flannel/DNS 0.25
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
375 TestNetworkPlugins/group/enable-default-cni/Start 56.41
376 TestNetworkPlugins/group/flannel/Start 75.08
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.32
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.31
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.51
384 TestNetworkPlugins/group/bridge/Start 88.88
385 TestNetworkPlugins/group/flannel/NetCatPod 12.43
386 TestNetworkPlugins/group/flannel/DNS 0.3
387 TestNetworkPlugins/group/flannel/Localhost 0.22
388 TestNetworkPlugins/group/flannel/HairPin 0.18
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 10.25
391 TestNetworkPlugins/group/bridge/DNS 0.21
392 TestNetworkPlugins/group/bridge/Localhost 0.18
393 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (6.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-281343 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-281343 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.80475251s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.81s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-281343
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-281343: exit status 85 (69.888569ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-281343 | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |          |
	|         | -p download-only-281343        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 14:15:13
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 14:15:13.246152 3713730 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:15:13.246289 3713730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:15:13.246300 3713730 out.go:304] Setting ErrFile to fd 2...
	I0701 14:15:13.246305 3713730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:15:13.246551 3713730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	W0701 14:15:13.246684 3713730 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19166-3708336/.minikube/config/config.json: open /home/jenkins/minikube-integration/19166-3708336/.minikube/config/config.json: no such file or directory
	I0701 14:15:13.247105 3713730 out.go:298] Setting JSON to true
	I0701 14:15:13.247989 3713730 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":165465,"bootTime":1719677849,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0701 14:15:13.248066 3713730 start.go:139] virtualization:  
	I0701 14:15:13.250783 3713730 out.go:97] [download-only-281343] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0701 14:15:13.250943 3713730 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball: no such file or directory
	I0701 14:15:13.251007 3713730 notify.go:220] Checking for updates...
	I0701 14:15:13.252698 3713730 out.go:169] MINIKUBE_LOCATION=19166
	I0701 14:15:13.254564 3713730 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 14:15:13.256317 3713730 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:15:13.258276 3713730 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	I0701 14:15:13.260359 3713730 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0701 14:15:13.263629 3713730 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 14:15:13.263946 3713730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 14:15:13.290823 3713730 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0701 14:15:13.290944 3713730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:15:13.347857 3713730 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-01 14:15:13.338065904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:15:13.347965 3713730 docker.go:295] overlay module found
	I0701 14:15:13.349875 3713730 out.go:97] Using the docker driver based on user configuration
	I0701 14:15:13.349905 3713730 start.go:297] selected driver: docker
	I0701 14:15:13.349913 3713730 start.go:901] validating driver "docker" against <nil>
	I0701 14:15:13.350020 3713730 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:15:13.410684 3713730 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-01 14:15:13.401325515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:15:13.410874 3713730 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 14:15:13.411167 3713730 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0701 14:15:13.411317 3713730 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 14:15:13.413442 3713730 out.go:169] Using Docker driver with root privileges
	I0701 14:15:13.415041 3713730 cni.go:84] Creating CNI manager for ""
	I0701 14:15:13.415059 3713730 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 14:15:13.415076 3713730 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 14:15:13.415146 3713730 start.go:340] cluster config:
	{Name:download-only-281343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-281343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 14:15:13.416799 3713730 out.go:97] Starting "download-only-281343" primary control-plane node in "download-only-281343" cluster
	I0701 14:15:13.416820 3713730 cache.go:121] Beginning downloading kic base image for docker with crio
	I0701 14:15:13.418488 3713730 out.go:97] Pulling base image v0.0.44-1719413016-19142 ...
	I0701 14:15:13.418515 3713730 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0701 14:15:13.418662 3713730 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon
	I0701 14:15:13.432901 3713730 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d to local cache
	I0701 14:15:13.433131 3713730 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local cache directory
	I0701 14:15:13.433231 3713730 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d to local cache
	I0701 14:15:13.483880 3713730 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0701 14:15:13.483920 3713730 cache.go:56] Caching tarball of preloaded images
	I0701 14:15:13.484168 3713730 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0701 14:15:13.486292 3713730 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0701 14:15:13.486319 3713730 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0701 14:15:13.578586 3713730 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0701 14:15:17.076165 3713730 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d as a tarball
	
	
	* The control-plane node download-only-281343 host does not exist
	  To start a cluster, run: "minikube start -p download-only-281343"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
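For reference, the download-only flow asserted above can be reproduced by hand. A minimal sketch, with the profile name, flags, and expected exit code taken from the Audit table and log output above:

    # Download-only start for v1.20.0 (mirrors the Audit table entry):
    out/minikube-linux-arm64 start -o=json --download-only -p download-only-281343 \
      --force --alsologtostderr --kubernetes-version=v1.20.0 \
      --container-runtime=crio --driver=docker
    # No cluster host is created, so "logs" is expected to exit 85:
    out/minikube-linux-arm64 logs -p download-only-281343; echo $?   # 85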

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-281343
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.30.2/json-events (5.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-789626 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-789626 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.761700297s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (5.76s)

                                                
                                    
TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-789626
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-789626: exit status 85 (69.132205ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-281343 | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | -p download-only-281343        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| delete  | -p download-only-281343        | download-only-281343 | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC | 01 Jul 24 14:15 UTC |
	| start   | -o=json --download-only        | download-only-789626 | jenkins | v1.33.1 | 01 Jul 24 14:15 UTC |                     |
	|         | -p download-only-789626        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/01 14:15:20
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.22.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0701 14:15:20.484053 3713934 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:15:20.484189 3713934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:15:20.484200 3713934 out.go:304] Setting ErrFile to fd 2...
	I0701 14:15:20.484205 3713934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:15:20.484532 3713934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 14:15:20.485527 3713934 out.go:298] Setting JSON to true
	I0701 14:15:20.486450 3713934 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":165472,"bootTime":1719677849,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0701 14:15:20.486519 3713934 start.go:139] virtualization:  
	I0701 14:15:20.488782 3713934 out.go:97] [download-only-789626] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0701 14:15:20.488973 3713934 notify.go:220] Checking for updates...
	I0701 14:15:20.490613 3713934 out.go:169] MINIKUBE_LOCATION=19166
	I0701 14:15:20.492471 3713934 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 14:15:20.494254 3713934 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:15:20.495848 3713934 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	I0701 14:15:20.497629 3713934 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0701 14:15:20.500951 3713934 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0701 14:15:20.501379 3713934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 14:15:20.522639 3713934 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0701 14:15:20.522748 3713934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:15:20.586254 3713934 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-01 14:15:20.576316147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:15:20.586367 3713934 docker.go:295] overlay module found
	I0701 14:15:20.588248 3713934 out.go:97] Using the docker driver based on user configuration
	I0701 14:15:20.588273 3713934 start.go:297] selected driver: docker
	I0701 14:15:20.588287 3713934 start.go:901] validating driver "docker" against <nil>
	I0701 14:15:20.588401 3713934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:15:20.644948 3713934 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-01 14:15:20.630284537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:15:20.645132 3713934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0701 14:15:20.645435 3713934 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0701 14:15:20.645641 3713934 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0701 14:15:20.647558 3713934 out.go:169] Using Docker driver with root privileges
	I0701 14:15:20.649085 3713934 cni.go:84] Creating CNI manager for ""
	I0701 14:15:20.649102 3713934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0701 14:15:20.649114 3713934 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0701 14:15:20.649194 3713934 start.go:340] cluster config:
	{Name:download-only-789626 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-789626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 14:15:20.650986 3713934 out.go:97] Starting "download-only-789626" primary control-plane node in "download-only-789626" cluster
	I0701 14:15:20.651004 3713934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0701 14:15:20.652834 3713934 out.go:97] Pulling base image v0.0.44-1719413016-19142 ...
	I0701 14:15:20.652858 3713934 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:15:20.652915 3713934 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local docker daemon
	I0701 14:15:20.667949 3713934 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d to local cache
	I0701 14:15:20.668074 3713934 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local cache directory
	I0701 14:15:20.668094 3713934 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d in local cache directory, skipping pull
	I0701 14:15:20.668098 3713934 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d exists in cache, skipping pull
	I0701 14:15:20.668106 3713934 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d as a tarball
	I0701 14:15:20.732656 3713934 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0701 14:15:20.732696 3713934 cache.go:56] Caching tarball of preloaded images
	I0701 14:15:20.732866 3713934 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0701 14:15:20.734943 3713934 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0701 14:15:20.734981 3713934 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 ...
	I0701 14:15:20.820247 3713934 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:e4bf0ba8584d1a2d67dbb103edb83dd1 -> /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4
	I0701 14:15:24.697710 3713934 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 ...
	I0701 14:15:24.697928 3713934 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-789626 host does not exist
	  To start a cluster, run: "minikube start -p download-only-789626"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.07s)
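The preload download above is checksum-pinned (md5:e4bf0ba8584d1a2d67dbb103edb83dd1 in the URL). A minimal sketch of verifying the cached tarball offline, assuming the workspace path from the log:

    # Verify the cached preload tarball against the checksum embedded in
    # the download URL (note: two spaces between hash and path):
    echo "e4bf0ba8584d1a2d67dbb103edb83dd1  /home/jenkins/minikube-integration/19166-3708336/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-arm64.tar.lz4" \
      | md5sum -c -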

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.20s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-789626
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-147566 --alsologtostderr --binary-mirror http://127.0.0.1:42755 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-147566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-147566
--- PASS: TestBinaryMirror (0.54s)
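--binary-mirror points minikube's Kubernetes binary downloads at an alternate HTTP endpoint. A minimal sketch of the same invocation, assuming a mirror is already serving on the port shown (42755 is this run's ephemeral test listener; yours will differ):

    out/minikube-linux-arm64 start --download-only -p binary-mirror-147566 \
      --alsologtostderr --binary-mirror http://127.0.0.1:42755 \
      --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 delete -p binary-mirror-147566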

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-929335
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-929335: exit status 85 (88.489359ms)

                                                
                                                
-- stdout --
	* Profile "addons-929335" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-929335"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-929335
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-929335: exit status 85 (67.709118ms)

                                                
                                                
-- stdout --
	* Profile "addons-929335" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-929335"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
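Both PreSetup checks assert the same contract: toggling an addon on a profile that does not exist exits 85 and creates no state. A minimal sketch:

    out/minikube-linux-arm64 addons enable dashboard -p addons-929335;  echo $?   # 85
    out/minikube-linux-arm64 addons disable dashboard -p addons-929335; echo $?   # 85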

                                                
                                    
TestAddons/Setup (214.72s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-929335 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-929335 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m34.724292739s)
--- PASS: TestAddons/Setup (214.72s)

                                                
                                    
TestAddons/parallel/Registry (16.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 63.614371ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-bnzqk" [710fb3bb-d2cb-4fb1-a706-25569704842a] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005054672s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cwtgh" [d522d504-68de-46ed-a686-4cb3f3054752] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004816998s
addons_test.go:342: (dbg) Run:  kubectl --context addons-929335 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-929335 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-929335 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.136932377s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 ip
2024/07/01 14:19:18 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.12s)
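The DEBUG GET above hits the registry addon's NodePort directly. A minimal sketch of probing the same endpoint by hand, assuming the addon serves the standard Docker Registry HTTP API (where /v2/_catalog lists repositories):

    curl "http://$(out/minikube-linux-arm64 -p addons-929335 ip):5000/v2/_catalog"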

                                                
                                    
TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8j46m" [af7151fd-575f-412e-84ee-483ab9498590] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004615524s
addons_test.go:843: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-929335
addons_test.go:843: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-929335: (5.76505609s)
--- PASS: TestAddons/parallel/InspektorGadget (11.77s)

                                                
                                    
TestAddons/parallel/CSI (51.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 9.619175ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-929335 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-929335 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2417282a-c31c-447d-8ead-c1dc554a7467] Pending
helpers_test.go:344: "task-pv-pod" [2417282a-c31c-447d-8ead-c1dc554a7467] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2417282a-c31c-447d-8ead-c1dc554a7467] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003265446s
addons_test.go:586: (dbg) Run:  kubectl --context addons-929335 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-929335 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-929335 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-929335 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-929335 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-929335 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-929335 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e2cbd0fe-0ccd-4453-a818-e0e60dadf848] Pending
helpers_test.go:344: "task-pv-pod-restore" [e2cbd0fe-0ccd-4453-a818-e0e60dadf848] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e2cbd0fe-0ccd-4453-a818-e0e60dadf848] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003645914s
addons_test.go:628: (dbg) Run:  kubectl --context addons-929335 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-929335 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-929335 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-arm64 -p addons-929335 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.758339736s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.46s)
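The repeated jsonpath polls above are helpers_test.go waiting for each PVC to bind. An equivalent manual wait loop, as a sketch using the same jsonpath:

    until [ "$(kubectl --context addons-929335 get pvc hpvc-restore \
        -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done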

                                                
                                    
TestAddons/parallel/Headlamp (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-929335 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-jbhwq" [07ca4ffd-6890-4524-b76d-ac6cb6058cf6] Pending
helpers_test.go:344: "headlamp-7867546754-jbhwq" [07ca4ffd-6890-4524-b76d-ac6cb6058cf6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-jbhwq" [07ca4ffd-6890-4524-b76d-ac6cb6058cf6] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00420081s
--- PASS: TestAddons/parallel/Headlamp (11.98s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-wm7c2" [7b299e5a-f0ee-422c-bff3-c896eb0d5feb] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003745472s
addons_test.go:862: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-929335
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                    
TestAddons/parallel/LocalPath (51.38s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-929335 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-929335 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-929335 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5f5941c1-10ff-401e-a0f5-a798789af3e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5f5941c1-10ff-401e-a0f5-a798789af3e3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5f5941c1-10ff-401e-a0f5-a798789af3e3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004749672s
addons_test.go:992: (dbg) Run:  kubectl --context addons-929335 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 ssh "cat /opt/local-path-provisioner/pvc-c612bd66-1de9-4129-954e-9710bab6cabd_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-929335 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-929335 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-arm64 -p addons-929335 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-arm64 -p addons-929335 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.234666377s)
--- PASS: TestAddons/parallel/LocalPath (51.38s)
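The ssh step above reads back the file the test pod wrote into the host-side local-path volume; the pvc-… directory name is unique per run. A sketch for locating the current volumes:

    out/minikube-linux-arm64 -p addons-929335 ssh "ls /opt/local-path-provisioner/"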

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ssxlb" [07a73834-f2a1-49e5-ae9a-e15bee08c8ab] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005439686s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-929335
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-k9fkr" [c776ae16-2f0a-4fa0-8823-c3fb5cd2e902] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005602628s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-929335 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-929335 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-929335
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-929335: (11.945776709s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-929335
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-929335
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-929335
--- PASS: TestAddons/StoppedEnableDisable (12.22s)

                                                
                                    
TestCertOptions (36.85s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-257243 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-257243 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.122405122s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-257243 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-257243 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-257243 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-257243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-257243
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-257243: (2.016981897s)
--- PASS: TestCertOptions (36.85s)
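The openssl step above dumps the full apiserver certificate. The SANs injected by --apiserver-ips/--apiserver-names can be pulled out directly; a minimal sketch:

    out/minikube-linux-arm64 -p cert-options-257243 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # Expect 127.0.0.1, 192.168.15.15, localhost and www.google.com among
    # the entries, matching the flags passed above.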

                                                
                                    
TestCertExpiration (249.04s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-603938 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-603938 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.18452551s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-603938 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-603938 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.188345803s)
helpers_test.go:175: Cleaning up "cert-expiration-603938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-603938
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-603938: (2.667958258s)
--- PASS: TestCertExpiration (249.04s)
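--cert-expiration=8760h is one year (365 * 24 h); the test first issues 3-minute certs, lets them lapse, then re-runs start with the longer expiry to force regeneration. The resulting validity window can be inspected over ssh; a sketch:

    out/minikube-linux-arm64 -p cert-expiration-603938 ssh \
      "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"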

                                                
                                    
TestForceSystemdFlag (44.36s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-998986 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-998986 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.543182116s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-998986 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-998986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-998986
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-998986: (2.453681522s)
--- PASS: TestForceSystemdFlag (44.36s)
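The ssh step above dumps cri-o's drop-in config; with --force-systemd the expectation is a systemd cgroup manager. A sketch of checking just that key, assuming the setting lives in the same drop-in file:

    out/minikube-linux-arm64 -p force-systemd-flag-998986 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # Expected: cgroup_manager = "systemd"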

                                                
                                    
TestForceSystemdEnv (42.12s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-737034 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-737034 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.329582916s)
helpers_test.go:175: Cleaning up "force-systemd-env-737034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-737034
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-737034: (2.786703315s)
--- PASS: TestForceSystemdEnv (42.12s)
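The env variant drives the same switch through the environment rather than a flag. A minimal sketch, assuming MINIKUBE_FORCE_SYSTEMD (minikube's environment toggle for this behavior):

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start \
      -p force-systemd-env-737034 --memory=2048 --alsologtostderr -v=5 \
      --driver=docker --container-runtime=crio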

                                                
                                    
TestErrorSpam/setup (31.09s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-838820 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-838820 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-838820 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-838820 --driver=docker  --container-runtime=crio: (31.093965412s)
--- PASS: TestErrorSpam/setup (31.09s)

                                                
                                    
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
x
+
TestErrorSpam/status (0.98s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 status
--- PASS: TestErrorSpam/status (0.98s)

                                                
                                    
x
+
TestErrorSpam/pause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
x
+
TestErrorSpam/stop (1.44s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 stop: (1.242311502s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-838820 --log_dir /tmp/nospam-838820 stop
--- PASS: TestErrorSpam/stop (1.44s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19166-3708336/.minikube/files/etc/test/nested/copy/3713725/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (47.65s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373457 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-373457 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (47.648719723s)
--- PASS: TestFunctional/serial/StartWithProxy (47.65s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (28.47s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373457 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-373457 --alsologtostderr -v=8: (28.473898285s)
functional_test.go:659: soft start took 28.474392395s for "functional-373457" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.47s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-373457 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 cache add registry.k8s.io/pause:3.1: (1.367483513s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 cache add registry.k8s.io/pause:3.3: (1.324051966s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 cache add registry.k8s.io/pause:latest: (1.178759731s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-373457 /tmp/TestFunctionalserialCacheCmdcacheadd_local3508055292/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 cache add minikube-local-cache-test:functional-373457
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 cache delete minikube-local-cache-test:functional-373457
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-373457
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.01s)
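
A minimal sketch of the local-image caching flow exercised above (image tag is a placeholder): `cache add` takes an image that exists only in the host's Docker daemon and loads it into the cluster node's image store.

	docker build -t demo-image:local .                               # build an image that exists only on the host
	out/minikube-linux-arm64 -p demo cache add demo-image:local      # export it and load it into the node
	out/minikube-linux-arm64 -p demo cache delete demo-image:local   # drop it from the cache again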

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373457 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (304.745262ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 cache reload: (1.163235458s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
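
Condensed, the reload flow above is: delete the image on the node, confirm `crictl inspecti` now fails, then let `cache reload` re-push everything held in the local cache. A minimal sketch with a placeholder profile:

	mk=out/minikube-linux-arm64
	$mk -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest        # remove the image from the node
	$mk -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # non-zero exit: image is gone
	$mk -p demo cache reload                                            # re-load cached images into the node
	$mk -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: image is back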

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 kubectl -- --context functional-373457 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-373457 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (40s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373457 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-373457 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.004143214s)
functional_test.go:757: restart took 40.004251908s for "functional-373457" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.00s)
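
`--extra-config=component.key=value` passes `--key=value` through to the named Kubernetes component, here the API server's admission plug-ins. One hedged way to confirm the flag landed, assuming the usual `component=kube-apiserver` static-pod label:

	out/minikube-linux-arm64 start -p demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context demo -n kube-system get pods -l component=kube-apiserver -o yaml | grep enable-admission-plugins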

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-373457 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.64s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 logs: (1.643984068s)
--- PASS: TestFunctional/serial/LogsCmd (1.64s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.77s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 logs --file /tmp/TestFunctionalserialLogsFileCmd3126826756/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 logs --file /tmp/TestFunctionalserialLogsFileCmd3126826756/001/logs.txt: (1.769160249s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.68s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-373457 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-373457
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-373457: exit status 115 (587.311315ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32354 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-373457 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.68s)
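
A minimal sketch of what this test provokes (manifest name is hypothetical): a Service whose selector matches no running pod, so `minikube service` bails out with SVC_UNREACHABLE, which this run mapped to exit code 115:

	kubectl --context demo apply -f svc-with-no-backing-pods.yaml   # hypothetical Service selecting nothing
	out/minikube-linux-arm64 service invalid-svc -p demo            # fails: no running pod for the service
	echo $?                                                         # 115 in this run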

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373457 config get cpus: exit status 14 (75.144727ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373457 config get cpus: exit status 14 (64.578007ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
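
The exit-14 behavior above makes `config get` easy to script against; a minimal sketch (profile name is a placeholder):

	mk=out/minikube-linux-arm64
	$mk -p demo config set cpus 2
	$mk -p demo config get cpus     # prints 2, exit 0
	$mk -p demo config unset cpus
	$mk -p demo config get cpus || echo "cpus not set (exit $?)"   # exit 14 when the key is absent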

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (12.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-373457 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-373457 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3742651: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.40s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373457 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-373457 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (177.464446ms)

                                                
                                                
-- stdout --
	* [functional-373457] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 14:28:51.450663 3741729 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:28:51.450872 3741729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:28:51.450884 3741729 out.go:304] Setting ErrFile to fd 2...
	I0701 14:28:51.450890 3741729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:28:51.451262 3741729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 14:28:51.451834 3741729 out.go:298] Setting JSON to false
	I0701 14:28:51.453105 3741729 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":166283,"bootTime":1719677849,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0701 14:28:51.453181 3741729 start.go:139] virtualization:  
	I0701 14:28:51.457054 3741729 out.go:177] * [functional-373457] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0701 14:28:51.459184 3741729 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 14:28:51.459305 3741729 notify.go:220] Checking for updates...
	I0701 14:28:51.463179 3741729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 14:28:51.465169 3741729 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:28:51.467381 3741729 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	I0701 14:28:51.469256 3741729 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0701 14:28:51.471559 3741729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 14:28:51.474032 3741729 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:28:51.474632 3741729 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 14:28:51.511087 3741729 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0701 14:28:51.511207 3741729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:28:51.566753 3741729 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-01 14:28:51.556832303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:28:51.566860 3741729 docker.go:295] overlay module found
	I0701 14:28:51.568706 3741729 out.go:177] * Using the docker driver based on existing profile
	I0701 14:28:51.570295 3741729 start.go:297] selected driver: docker
	I0701 14:28:51.570311 3741729 start.go:901] validating driver "docker" against &{Name:functional-373457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-373457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 14:28:51.570424 3741729 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 14:28:51.572880 3741729 out.go:177] 
	W0701 14:28:51.575075 3741729 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0701 14:28:51.576892 3741729 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373457 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)
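
`--dry-run` runs the full validation path without mutating the profile, so the memory check rejects the request before anything is created. A minimal sketch (exit code 23 is what this run mapped RSRC_INSUFFICIENT_REQ_MEMORY to):

	out/minikube-linux-arm64 start -p demo --dry-run --memory 250MB --driver=docker --container-runtime=crio
	echo $?   # 23: requested memory below the usable minimum, nothing was started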

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-373457 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-373457 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (180.221544ms)

                                                
                                                
-- stdout --
	* [functional-373457] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 14:28:56.357394 3742459 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:28:56.357631 3742459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:28:56.357657 3742459 out.go:304] Setting ErrFile to fd 2...
	I0701 14:28:56.357676 3742459 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:28:56.358080 3742459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 14:28:56.358511 3742459 out.go:298] Setting JSON to false
	I0701 14:28:56.359589 3742459 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":166288,"bootTime":1719677849,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0701 14:28:56.359711 3742459 start.go:139] virtualization:  
	I0701 14:28:56.363833 3742459 out.go:177] * [functional-373457] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0701 14:28:56.366336 3742459 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 14:28:56.366500 3742459 notify.go:220] Checking for updates...
	I0701 14:28:56.370588 3742459 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 14:28:56.372656 3742459 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 14:28:56.374529 3742459 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	I0701 14:28:56.377138 3742459 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0701 14:28:56.379354 3742459 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 14:28:56.381774 3742459 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:28:56.382296 3742459 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 14:28:56.410595 3742459 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0701 14:28:56.410707 3742459 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:28:56.466836 3742459 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-01 14:28:56.456868863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:28:56.466947 3742459 docker.go:295] overlay module found
	I0701 14:28:56.469252 3742459 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0701 14:28:56.471477 3742459 start.go:297] selected driver: docker
	I0701 14:28:56.471493 3742459 start.go:901] validating driver "docker" against &{Name:functional-373457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719413016-19142@sha256:af368900f8c68437efd9db2dba61a03b07068e5b9fe0dc8d7f46be199657779d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-373457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0701 14:28:56.471597 3742459 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 14:28:56.474673 3742459 out.go:177] 
	W0701 14:28:56.477170 3742459 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0701 14:28:56.479180 3742459 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
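
The localized output above is selected from the process locale; a minimal sketch, assuming minikube picks the language up from LC_ALL/LANG:

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p demo --dry-run --memory 250MB --driver=docker --container-runtime=crio
	# same RSRC_INSUFFICIENT_REQ_MEMORY failure, with the message rendered in French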

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.16s)
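
`status -f` accepts an arbitrary Go template over the status struct (Host, Kubelet, APIServer, Kubeconfig are the fields used above), and `-o json` emits the same data machine-readably; a minimal sketch:

	out/minikube-linux-arm64 -p demo status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
	out/minikube-linux-arm64 -p demo status -o json   # machine-readable variant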

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-373457 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-373457 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-6xgzw" [ef50d63e-018f-4a49-b256-229635f6c020] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-6xgzw" [ef50d63e-018f-4a49-b256-229635f6c020] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003795456s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30199
functional_test.go:1671: http://192.168.49.2:30199: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6f49f58cd5-6xgzw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30199
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.57s)
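
Condensed, the connectivity flow above is deploy, expose as NodePort, resolve the URL, then hit it; a minimal sketch (deployment name is a placeholder):

	kubectl --context demo create deployment hello --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context demo expose deployment hello --type=NodePort --port=8080
	url=$(out/minikube-linux-arm64 -p demo service hello --url)
	curl -s "$url"   # echoserver reflects the request details back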

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8f109b33-d07e-41c7-b518-a8766d9ea20f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004523422s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-373457 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-373457 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-373457 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-373457 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [763e7b50-e275-4863-a701-0a0161f59d60] Pending
helpers_test.go:344: "sp-pod" [763e7b50-e275-4863-a701-0a0161f59d60] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [763e7b50-e275-4863-a701-0a0161f59d60] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003749218s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-373457 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-373457 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-373457 delete -f testdata/storage-provisioner/pod.yaml: (1.060945942s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-373457 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d99a31e1-0a62-4f0b-b0e9-566620c30d01] Pending
helpers_test.go:344: "sp-pod" [d99a31e1-0a62-4f0b-b0e9-566620c30d01] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00392484s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-373457 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.19s)
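
The persistence assertion hinges on the PVC outliving the pod: write through the mount, delete and recreate the pod against the same claim, and the file must still be there. A minimal sketch, reusing the pod and mount names from this run (the manifest file is hypothetical):

	kubectl --context demo exec sp-pod -- touch /tmp/mount/foo   # write through the PVC mount
	kubectl --context demo delete pod sp-pod                     # remove the consumer pod only
	kubectl --context demo apply -f pod.yaml                     # hypothetical manifest recreating sp-pod against the same claim
	kubectl --context demo exec sp-pod -- ls /tmp/mount          # foo survives the pod restart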

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh -n functional-373457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 cp functional-373457:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1956678875/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh -n functional-373457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh -n functional-373457 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.96s)
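
`cp` works in both directions and, as the last pair of steps shows, creates missing target directories on the node; a minimal sketch (paths are placeholders):

	out/minikube-linux-arm64 -p demo cp ./local.txt /home/docker/remote.txt       # host -> node
	out/minikube-linux-arm64 -p demo cp demo:/home/docker/remote.txt ./back.txt   # node -> host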

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/3713725/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo cat /etc/test/nested/copy/3713725/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
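
Files placed under the host's ~/.minikube/files/<path> are synced into the node at /<path> when the cluster starts, which is where the nested hosts file above came from; a minimal sketch (file name is hypothetical):

	mkdir -p ~/.minikube/files/etc/demo
	echo "file sync check" > ~/.minikube/files/etc/demo/marker
	out/minikube-linux-arm64 start -p demo --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p demo ssh "cat /etc/demo/marker"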

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/3713725.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo cat /etc/ssl/certs/3713725.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/3713725.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo cat /usr/share/ca-certificates/3713725.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/37137252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo cat /etc/ssl/certs/37137252.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/37137252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo cat /usr/share/ca-certificates/37137252.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)
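
The hashed names checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention used for /etc/ssl/certs links; a minimal sketch, assuming a custom PEM dropped in ~/.minikube/certs before the cluster was started:

	h=$(openssl x509 -noout -hash -in ~/.minikube/certs/mycert.pem)   # hypothetical cert path; prints the 8-hex-digit hash
	out/minikube-linux-arm64 -p demo ssh "sudo cat /etc/ssl/certs/$h.0"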

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-373457 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373457 ssh "sudo systemctl is-active docker": exit status 1 (326.362536ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373457 ssh "sudo systemctl is-active containerd": exit status 1 (356.365016ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
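
The exit-status-3 results above are standard systemd semantics: `systemctl is-active` returns 0 for an active unit and non-zero (typically 3) otherwise, which is what makes this check scriptable; a minimal sketch:

	out/minikube-linux-arm64 -p demo ssh "sudo systemctl is-active docker" || echo "docker unit is not active"
	out/minikube-linux-arm64 -p demo ssh "sudo systemctl is-active containerd" || echo "containerd unit is not active"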

TestFunctional/parallel/License (0.25s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.48s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 version -o=json --components
E0701 14:29:04.050244 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 version -o=json --components: (1.480803432s)
--- PASS: TestFunctional/parallel/Version/components (1.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-373457 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-373457
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240513-cd2ac642
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-373457 image ls --format short --alsologtostderr:
I0701 14:29:05.818023 3743887 out.go:291] Setting OutFile to fd 1 ...
I0701 14:29:05.818161 3743887 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 14:29:05.818171 3743887 out.go:304] Setting ErrFile to fd 2...
I0701 14:29:05.818176 3743887 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 14:29:05.818430 3743887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
I0701 14:29:05.819031 3743887 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0701 14:29:05.819157 3743887 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0701 14:29:05.819702 3743887 cli_runner.go:164] Run: docker container inspect functional-373457 --format={{.State.Status}}
I0701 14:29:05.844339 3743887 ssh_runner.go:195] Run: systemctl --version
I0701 14:29:05.844396 3743887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373457
I0701 14:29:05.868614 3743887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/functional-373457/id_rsa Username:docker}
I0701 14:29:05.977127 3743887 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-373457 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.12-0           | 014faa467e297 | 140MB  |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e1dcc3400d3ea | 108MB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | 89d73d416b992 | 62MB   |
| docker.io/library/nginx                 | alpine             | 5461b18aaccf3 | 46.7MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/my-image                      | functional-373457  | 8de0f87ac99f1 | 1.64MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-scheduler          | v1.30.2            | c7dd04b1bafeb | 61.6MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/google-containers/addon-resizer  | functional-373457  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | 2437cf7621777 | 58.8MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/library/nginx                 | latest             | 0469e929ca632 | 197MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 84c601f3f72c8 | 114MB  |
| registry.k8s.io/kube-proxy              | v1.30.2            | 66dbb96a9149f | 89.2MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-373457 image ls --format table --alsologtostderr:
I0701 14:29:08.930689 3744247 out.go:291] Setting OutFile to fd 1 ...
I0701 14:29:08.930916 3744247 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 14:29:08.930944 3744247 out.go:304] Setting ErrFile to fd 2...
I0701 14:29:08.930964 3744247 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 14:29:08.931232 3744247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
I0701 14:29:08.931940 3744247 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0701 14:29:08.932108 3744247 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0701 14:29:08.932633 3744247 cli_runner.go:164] Run: docker container inspect functional-373457 --format={{.State.Status}}
I0701 14:29:08.953483 3744247 ssh_runner.go:195] Run: systemctl --version
I0701 14:29:08.953539 3744247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373457
I0701 14:29:08.979084 3744247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/functional-373457/id_rsa Username:docker}
I0701 14:29:09.106422 3744247 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-373457 image ls --format json --alsologtostderr:
[{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":["docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55","docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62"],"repoTags":["docker.io/library/nginx:alpine"],"size":"46671377"},{"id":"0469e929ca6320c98871e17260708787ca6e6547a0b4b21a8854e455adac73df","repoDigests":["docker.io/library/nginx@sha256:9c367186df9a6b18c6735357b8eb7f407347e84aea09beb184961cb83543d46e","docker.io/library/nginx@sha256:fb3444ab758aa6b182f6152b9cc7231241911f60e61b928ee53036e4ba2c858b"],"repoTags":["docker.io/library/nginx:latest"],"size":"197097126"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k
8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-373457"],"size":"34114467"},{"id":"8de0f87ac99f15e21980c803263e4a3cbe16ed52df26b1b3fd40149716757374","repoDigests":["localh
ost/my-image@sha256:fd87735d209c057deef7c6378f372be4722ae11c223bf8781cab3a76ea8915b1"],"repoTags":["localhost/my-image:functional-373457"],"size":"1640226"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:8ddc81caccc97ada7e3c53ebe2c03240f25cd123c479752a1c314c402b972028"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"108229958"},{"id":"66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae","repoDigests":["registry.k8s.io/kube-proxy@sha256:7df12f2b1bad9a90a39a1ca558501a4ba66b8943df1d5f2438788aa15c9d23ef","registry.k8s.io/kube-proxy@sha256:
8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"89199511"},{"id":"89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40","repoDigests":["docker.io/kindest/kindnetd@sha256:1770ac17c925dfef54061d598c65310ff99269a3a77d5c7257f04366b38c64be","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"62007858"},{"id":"16da03f508ae405244a9b02e45752d3dd137be7fad2bd8bcc76b562585156870","repoDigests":["docker.io/library/c8fe0c7e8d6f3a33ecff996b2d7802189f028c7ba1bbca19d4c757a6424789da-tmp@sha256:26f5d5234200790561798fa1fd3239b08d433d42c5ae9348e1d663e7864b6f42"],"repoTags":[],"size":"1637644"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:b
a9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"58812704"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b","registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"140414767"},{"id":"84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d","registry.k8s.io/kube-apiserver@sha256:74ea4e3a814490ffe1a66434837aea1e73006d559b65a
6321f3e41fc105845b7"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"113538528"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minik
ube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:96a3e2d1761583447d4ae302128b4956b855d14cdd5bf9ed4637d8b9f0c74a27"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"61568326"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09
b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-373457 image ls --format json --alsologtostderr:
I0701 14:29:08.808035 3744225 out.go:291] Setting OutFile to fd 1 ...
I0701 14:29:08.808195 3744225 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 14:29:08.808224 3744225 out.go:304] Setting ErrFile to fd 2...
I0701 14:29:08.808238 3744225 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 14:29:08.808521 3744225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
I0701 14:29:08.809297 3744225 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0701 14:29:08.809488 3744225 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0701 14:29:08.810099 3744225 cli_runner.go:164] Run: docker container inspect functional-373457 --format={{.State.Status}}
I0701 14:29:08.830663 3744225 ssh_runner.go:195] Run: systemctl --version
I0701 14:29:08.830724 3744225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373457
I0701 14:29:08.848230 3744225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/functional-373457/id_rsa Username:docker}
I0701 14:29:08.944163 3744225 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
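
The JSON format above is a flat array of image records. A small sketch that decodes it; the struct fields follow the keys visible in that output, and the full schema beyond those keys is an assumption:

package main

import (
    "encoding/json"
    "fmt"
)

// image mirrors the keys in `minikube image ls --format json` output
// above; note that size is a decimal byte count encoded as a string.
type image struct {
    ID          string   `json:"id"`
    RepoDigests []string `json:"repoDigests"`
    RepoTags    []string `json:"repoTags"`
    Size        string   `json:"size"`
}

func main() {
    // One record trimmed from the run above, used as sample input.
    data := []byte(`[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a",
        "repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],
        "repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]`)

    var imgs []image
    if err := json.Unmarshal(data, &imgs); err != nil {
        panic(err)
    }
    for _, img := range imgs {
        fmt.Printf("%s %v %s bytes\n", img.ID[:12], img.RepoTags, img.Size)
    }
}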

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-373457 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-373457
size: "34114467"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests:
- docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55
- docker.io/library/nginx@sha256:a7164ab2224553c2da2303d490474d4d546d2141eef1c6367a38d37d46992c62
repoTags:
- docker.io/library/nginx:alpine
size: "46671377"
- id: 0469e929ca6320c98871e17260708787ca6e6547a0b4b21a8854e455adac73df
repoDigests:
- docker.io/library/nginx@sha256:9c367186df9a6b18c6735357b8eb7f407347e84aea09beb184961cb83543d46e
- docker.io/library/nginx@sha256:fb3444ab758aa6b182f6152b9cc7231241911f60e61b928ee53036e4ba2c858b
repoTags:
- docker.io/library/nginx:latest
size: "197097126"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
- registry.k8s.io/kube-apiserver@sha256:74ea4e3a814490ffe1a66434837aea1e73006d559b65a6321f3e41fc105845b7
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "113538528"
- id: c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:96a3e2d1761583447d4ae302128b4956b855d14cdd5bf9ed4637d8b9f0c74a27
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "61568326"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:ba9e70dbdf0ff8a77ea63451bb1241d08819471730fe7a35a218a8db2ef7890c
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "58812704"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
- registry.k8s.io/etcd@sha256:675d0e055ad04d9d6227bbb1aa88626bb1903a8b9b177c0353c6e1b3112952ad
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "140414767"
- id: e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:8ddc81caccc97ada7e3c53ebe2c03240f25cd123c479752a1c314c402b972028
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "108229958"
- id: 89d73d416b992e8f9602b67b4614d9e7f0655aebb3696e18efec695e0b654c40
repoDigests:
- docker.io/kindest/kindnetd@sha256:1770ac17c925dfef54061d598c65310ff99269a3a77d5c7257f04366b38c64be
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "62007858"
- id: 66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7df12f2b1bad9a90a39a1ca558501a4ba66b8943df1d5f2438788aa15c9d23ef
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "89199511"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-373457 image ls --format yaml --alsologtostderr:
I0701 14:29:06.119081 3743939 out.go:291] Setting OutFile to fd 1 ...
I0701 14:29:06.119625 3743939 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 14:29:06.119632 3743939 out.go:304] Setting ErrFile to fd 2...
I0701 14:29:06.119637 3743939 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 14:29:06.119890 3743939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
I0701 14:29:06.120552 3743939 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0701 14:29:06.120673 3743939 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0701 14:29:06.121237 3743939 cli_runner.go:164] Run: docker container inspect functional-373457 --format={{.State.Status}}
I0701 14:29:06.154972 3743939 ssh_runner.go:195] Run: systemctl --version
I0701 14:29:06.155036 3743939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373457
I0701 14:29:06.173865 3743939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/functional-373457/id_rsa Username:docker}
I0701 14:29:06.281591 3743939 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373457 ssh pgrep buildkitd: exit status 1 (269.72072ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image build -t localhost/my-image:functional-373457 testdata/build --alsologtostderr
E0701 14:29:07.891775 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 image build -t localhost/my-image:functional-373457 testdata/build --alsologtostderr: (1.921064137s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-373457 image build -t localhost/my-image:functional-373457 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 16da03f508a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-373457
--> 8de0f87ac99
Successfully tagged localhost/my-image:functional-373457
8de0f87ac99f15e21980c803263e4a3cbe16ed52df26b1b3fd40149716757374
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-373457 image build -t localhost/my-image:functional-373457 testdata/build --alsologtostderr:
I0701 14:29:06.642681 3744030 out.go:291] Setting OutFile to fd 1 ...
I0701 14:29:06.643580 3744030 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 14:29:06.643594 3744030 out.go:304] Setting ErrFile to fd 2...
I0701 14:29:06.643599 3744030 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0701 14:29:06.643856 3744030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
I0701 14:29:06.644537 3744030 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0701 14:29:06.645175 3744030 config.go:182] Loaded profile config "functional-373457": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0701 14:29:06.645759 3744030 cli_runner.go:164] Run: docker container inspect functional-373457 --format={{.State.Status}}
I0701 14:29:06.662337 3744030 ssh_runner.go:195] Run: systemctl --version
I0701 14:29:06.662388 3744030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-373457
I0701 14:29:06.678989 3744030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33910 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/functional-373457/id_rsa Username:docker}
I0701 14:29:06.773724 3744030 build_images.go:161] Building image from path: /tmp/build.4153372617.tar
I0701 14:29:06.773790 3744030 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0701 14:29:06.783078 3744030 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4153372617.tar
I0701 14:29:06.787083 3744030 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4153372617.tar: stat -c "%s %y" /var/lib/minikube/build/build.4153372617.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4153372617.tar': No such file or directory
I0701 14:29:06.787115 3744030 ssh_runner.go:362] scp /tmp/build.4153372617.tar --> /var/lib/minikube/build/build.4153372617.tar (3072 bytes)
I0701 14:29:06.814932 3744030 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4153372617
I0701 14:29:06.823639 3744030 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4153372617 -xf /var/lib/minikube/build/build.4153372617.tar
I0701 14:29:06.832579 3744030 crio.go:315] Building image: /var/lib/minikube/build/build.4153372617
I0701 14:29:06.832661 3744030 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-373457 /var/lib/minikube/build/build.4153372617 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0701 14:29:08.491798 3744030 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-373457 /var/lib/minikube/build/build.4153372617 --cgroup-manager=cgroupfs: (1.659110177s)
I0701 14:29:08.491865 3744030 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4153372617
I0701 14:29:08.500859 3744030 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4153372617.tar
I0701 14:29:08.509786 3744030 build_images.go:217] Built localhost/my-image:functional-373457 from /tmp/build.4153372617.tar
I0701 14:29:08.509882 3744030 build_images.go:133] succeeded building to: functional-373457
I0701 14:29:08.509898 3744030 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image ls
2024/07/01 14:29:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.43s)
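
The STEP lines in the build stdout imply a three-instruction build context (roughly FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /); that reconstruction of testdata/build is an assumption, since the directory itself is not shown here. A rough Go sketch of the same build-then-verify flow, assuming a minikube binary on PATH:

package main

import (
    "bytes"
    "log"
    "os/exec"
)

func main() {
    profile := "functional-373457"
    tag := "localhost/my-image:" + profile

    // Build the image inside the cluster's crio runtime, as the test does.
    build := exec.Command("minikube", "-p", profile, "image", "build",
        "-t", tag, "testdata/build")
    if out, err := build.CombinedOutput(); err != nil {
        log.Fatalf("image build failed: %v\n%s", err, out)
    }

    // Verify the freshly built tag shows up in `image ls`.
    out, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
    if err != nil {
        log.Fatal(err)
    }
    if !bytes.Contains(out, []byte(tag)) {
        log.Fatalf("built image %s not listed", tag)
    }
    log.Printf("%s built and listed", tag)
}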

TestFunctional/parallel/ImageCommands/Setup (1.67s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.643338077s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-373457
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.67s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 update-context --alsologtostderr -v=2
E0701 14:29:05.331225 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image load --daemon gcr.io/google-containers/addon-resizer:functional-373457 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 image load --daemon gcr.io/google-containers/addon-resizer:functional-373457 --alsologtostderr: (4.630874484s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.04s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "429.705446ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "54.676623ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "451.75335ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "63.901152ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.84s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-373457 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-373457 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-373457 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3739992: os: process already finished
helpers_test.go:508: unable to kill pid 3739865: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-373457 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.84s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-373457 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.39s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-373457 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3679ce36-6336-49e5-9b15-bef75b52e134] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3679ce36-6336-49e5-9b15-bef75b52e134] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004140739s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image load --daemon gcr.io/google-containers/addon-resizer:functional-373457 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 image load --daemon gcr.io/google-containers/addon-resizer:functional-373457 --alsologtostderr: (3.101960372s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.50831797s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-373457
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image load --daemon gcr.io/google-containers/addon-resizer:functional-373457 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 image load --daemon gcr.io/google-containers/addon-resizer:functional-373457 --alsologtostderr: (3.729247635s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image save gcr.io/google-containers/addon-resizer:functional-373457 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.87s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image rm gcr.io/google-containers/addon-resizer:functional-373457 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-373457 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
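
The jsonpath query above is how the test learns which LoadBalancer IP the tunnel assigned. A small sketch of the same lookup from Go, shelling out to kubectl exactly as the log does:

package main

import (
    "log"
    "os/exec"
    "strings"
)

func main() {
    // Read the first ingress IP that `minikube tunnel` attached to the
    // nginx-svc LoadBalancer service.
    out, err := exec.Command("kubectl", "--context", "functional-373457",
        "get", "svc", "nginx-svc", "-o",
        "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
    if err != nil {
        log.Fatal(err)
    }
    ip := strings.TrimSpace(string(out))
    log.Printf("tunnel-assigned LoadBalancer IP: %s", ip) // 10.100.196.132 in this run
}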

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.196.132 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-373457 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.035383923s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.28s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-373457
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 image save --daemon gcr.io/google-containers/addon-resizer:functional-373457 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-373457
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.92s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-373457 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-373457 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-v2f2g" [c13aa3dd-e54c-4067-8cf0-e803f902f0fe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-v2f2g" [c13aa3dd-e54c-4067-8cf0-e803f902f0fe] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004536954s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

TestFunctional/parallel/ServiceCmd/List (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 service list -o json
functional_test.go:1490: Took "509.054929ms" to run "out/minikube-linux-arm64 -p functional-373457 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31354
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31354
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
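
The HTTPS, Format, and URL subtests all resolve the same NodePort endpoint (http://192.168.49.2:31354 in this run). A short sketch that discovers the URL the same way and probes it, assuming the endpoint is reachable from where the code runs:

package main

import (
    "log"
    "net/http"
    "os/exec"
    "strings"
)

func main() {
    // Ask minikube for the hello-node service URL, as the test does.
    out, err := exec.Command("minikube", "-p", "functional-373457",
        "service", "hello-node", "--url").Output()
    if err != nil {
        log.Fatal(err)
    }
    url := strings.TrimSpace(string(out))

    // The echoserver behind the service should answer any GET.
    resp, err := http.Get(url)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    log.Printf("GET %s -> %s", url, resp.Status)
}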

TestFunctional/parallel/MountCmd/any-port (7.38s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdany-port3898459882/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1719844131815302188" to /tmp/TestFunctionalparallelMountCmdany-port3898459882/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1719844131815302188" to /tmp/TestFunctionalparallelMountCmdany-port3898459882/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1719844131815302188" to /tmp/TestFunctionalparallelMountCmdany-port3898459882/001/test-1719844131815302188
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373457 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (369.027184ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  1 14:28 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  1 14:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  1 14:28 test-1719844131815302188
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh cat /mount-9p/test-1719844131815302188
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-373457 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [320a9ba9-27a9-49f7-acf2-fd27150fd214] Pending
helpers_test.go:344: "busybox-mount" [320a9ba9-27a9-49f7-acf2-fd27150fd214] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [320a9ba9-27a9-49f7-acf2-fd27150fd214] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [320a9ba9-27a9-49f7-acf2-fd27150fd214] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.029222699s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-373457 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdany-port3898459882/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.38s)
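For context: the any-port flow above starts a host-side 9p mount daemon, probes the guest until findmnt sees the mount (the first probe fails with exit status 1 while the mount is still coming up), then unmounts and stops the daemon. A minimal shell sketch of the same loop outside the test harness; the profile name and paths are illustrative, not taken from the harness:

    # Sketch only: assumes a running minikube profile and minikube on PATH.
    PROFILE=functional-373457            # illustrative profile name
    SRC=$(mktemp -d)
    echo "hello from host" > "$SRC/created-by-test"

    minikube mount -p "$PROFILE" "$SRC:/mount-9p" &   # background 9p daemon
    MOUNT_PID=$!

    # Retry until the mount is visible in the guest, as the test does.
    for _ in 1 2 3 4 5; do
      minikube -p "$PROFILE" ssh "findmnt -T /mount-9p | grep 9p" && break
      sleep 1
    done

    minikube -p "$PROFILE" ssh "ls -la /mount-9p"
    minikube -p "$PROFILE" ssh "sudo umount -f /mount-9p"
    kill "$MOUNT_PID"                    # stop the host-side mount process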

TestFunctional/parallel/MountCmd/specific-port (2.44s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdspecific-port2896567186/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373457 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (551.935953ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdspecific-port2896567186/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-373457 ssh "sudo umount -f /mount-9p": exit status 1 (391.707935ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-373457 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdspecific-port2896567186/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.44s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1587027794/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1587027794/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1587027794/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-373457 ssh "findmnt -T" /mount1: (1.091540069s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "findmnt -T" /mount2
E0701 14:29:02.764952 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:29:02.771685 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:29:02.783665 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:29:02.804886 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:29:02.847358 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:29:02.927564 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:29:03.088619 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-373457 ssh "findmnt -T" /mount3
E0701 14:29:03.409486 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-373457 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1587027794/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1587027794/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-373457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1587027794/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.93s)
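VerifyCleanup's point is that a single kill flag tears down every mount daemon for a profile, which is why the three stop steps above find no parent process left. A sketch under the same illustrative assumptions as before:

    PROFILE=functional-373457            # illustrative
    SRC=$(mktemp -d)
    for target in /mount1 /mount2 /mount3; do
      minikube mount -p "$PROFILE" "$SRC:$target" &
    done
    sleep 5
    minikube -p "$PROFILE" ssh "findmnt -T /mount1"
    # One invocation kills all mount processes belonging to the profile:
    minikube mount -p "$PROFILE" --kill=true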

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-373457
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-373457
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-373457
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (165.13s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-767646 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0701 14:29:13.012333 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:29:23.252696 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:29:43.733841 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:30:24.694701 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:31:46.615735 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-767646 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m44.272247582s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (165.13s)
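The --ha flag used above brings up a multi-control-plane cluster; in this run it provisioned three control-plane nodes behind the virtual endpoint 192.168.49.254:8443 that appears in later status logs. A condensed sketch of the same bring-up, with an illustrative profile name:

    minikube start -p ha-demo --ha --wait=true --memory=2200 \
      --driver=docker --container-runtime=crio
    minikube -p ha-demo status   # prints one host/kubelet/apiserver block per node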

TestMultiControlPlane/serial/DeployApp (7.34s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-767646 -- rollout status deployment/busybox: (4.405055924s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-8877b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-8bxff -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-zmcqt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-8877b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-8bxff -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-zmcqt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-8877b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-8bxff -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-zmcqt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.34s)

TestMultiControlPlane/serial/PingHostFromPods (1.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-8877b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-8877b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-8bxff -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-8bxff -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-zmcqt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-767646 -- exec busybox-fc5497c4f-zmcqt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
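The awk 'NR==5' / cut -f3 pipeline above encodes an assumption about busybox nslookup output: the answer record for the queried name lands on line 5, with the address in the third space-separated field. Illustratively (output shape is an assumption about the busybox image, not taken from this log):

    # Older busybox images print roughly:
    #   Server:    10.96.0.10
    #   Address 1: 10.96.0.10 ...
    #   (blank)
    #   Name:      host.minikube.internal
    #   Address 1: 192.168.49.1         <- line 5; field 3 is the host IP
    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3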

TestMultiControlPlane/serial/AddWorkerNode (56.24s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-767646 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-767646 -v=7 --alsologtostderr: (55.184224441s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr: (1.052190231s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.24s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-767646 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.73s)

TestMultiControlPlane/serial/CopyFile (18.95s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-767646 status --output json -v=7 --alsologtostderr: (1.007395955s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp testdata/cp-test.txt ha-767646:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1428980662/001/cp-test_ha-767646.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646:/home/docker/cp-test.txt ha-767646-m02:/home/docker/cp-test_ha-767646_ha-767646-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m02 "sudo cat /home/docker/cp-test_ha-767646_ha-767646-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646:/home/docker/cp-test.txt ha-767646-m03:/home/docker/cp-test_ha-767646_ha-767646-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m03 "sudo cat /home/docker/cp-test_ha-767646_ha-767646-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646:/home/docker/cp-test.txt ha-767646-m04:/home/docker/cp-test_ha-767646_ha-767646-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m04 "sudo cat /home/docker/cp-test_ha-767646_ha-767646-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp testdata/cp-test.txt ha-767646-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1428980662/001/cp-test_ha-767646-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m02:/home/docker/cp-test.txt ha-767646:/home/docker/cp-test_ha-767646-m02_ha-767646.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646 "sudo cat /home/docker/cp-test_ha-767646-m02_ha-767646.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m02:/home/docker/cp-test.txt ha-767646-m03:/home/docker/cp-test_ha-767646-m02_ha-767646-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m03 "sudo cat /home/docker/cp-test_ha-767646-m02_ha-767646-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m02:/home/docker/cp-test.txt ha-767646-m04:/home/docker/cp-test_ha-767646-m02_ha-767646-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m04 "sudo cat /home/docker/cp-test_ha-767646-m02_ha-767646-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp testdata/cp-test.txt ha-767646-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1428980662/001/cp-test_ha-767646-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m03:/home/docker/cp-test.txt ha-767646:/home/docker/cp-test_ha-767646-m03_ha-767646.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646 "sudo cat /home/docker/cp-test_ha-767646-m03_ha-767646.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m03:/home/docker/cp-test.txt ha-767646-m02:/home/docker/cp-test_ha-767646-m03_ha-767646-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m02 "sudo cat /home/docker/cp-test_ha-767646-m03_ha-767646-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m03:/home/docker/cp-test.txt ha-767646-m04:/home/docker/cp-test_ha-767646-m03_ha-767646-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m04 "sudo cat /home/docker/cp-test_ha-767646-m03_ha-767646-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp testdata/cp-test.txt ha-767646-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1428980662/001/cp-test_ha-767646-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m04:/home/docker/cp-test.txt ha-767646:/home/docker/cp-test_ha-767646-m04_ha-767646.txt
E0701 14:33:19.347962 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:33:19.353228 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:33:19.363903 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:33:19.384151 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:33:19.424323 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m04 "sudo cat /home/docker/cp-test.txt"
E0701 14:33:19.505337 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:33:19.665680 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646 "sudo cat /home/docker/cp-test_ha-767646-m04_ha-767646.txt"
E0701 14:33:19.986160 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m04:/home/docker/cp-test.txt ha-767646-m02:/home/docker/cp-test_ha-767646-m04_ha-767646-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m04 "sudo cat /home/docker/cp-test.txt"
E0701 14:33:20.626516 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m02 "sudo cat /home/docker/cp-test_ha-767646-m04_ha-767646-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 cp ha-767646-m04:/home/docker/cp-test.txt ha-767646-m03:/home/docker/cp-test_ha-767646-m04_ha-767646-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m04 "sudo cat /home/docker/cp-test.txt"
E0701 14:33:21.907145 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 ssh -n ha-767646-m03 "sudo cat /home/docker/cp-test_ha-767646-m04_ha-767646-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.95s)
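The CopyFile matrix above relies on minikube cp accepting a <node>:<path> spec on either side, so a file can move host-to-node, node-to-host, or node-to-node, with each transfer verified via ssh -n <node>. One leg of the matrix, condensed (names as in the log):

    minikube -p ha-767646 cp testdata/cp-test.txt ha-767646:/home/docker/cp-test.txt
    minikube -p ha-767646 cp ha-767646:/home/docker/cp-test.txt \
      ha-767646-m02:/home/docker/cp-test_ha-767646_ha-767646-m02.txt
    minikube -p ha-767646 ssh -n ha-767646-m02 \
      "sudo cat /home/docker/cp-test_ha-767646_ha-767646-m02.txt"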

TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 node stop m02 -v=7 --alsologtostderr
E0701 14:33:24.467948 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:33:29.588887 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-767646 node stop m02 -v=7 --alsologtostderr: (12.000019588s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr: exit status 7 (745.444626ms)

-- stdout --
	ha-767646
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-767646-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767646-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-767646-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0701 14:33:34.361625 3759994 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:33:34.361846 3759994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:33:34.361860 3759994 out.go:304] Setting ErrFile to fd 2...
	I0701 14:33:34.361867 3759994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:33:34.362298 3759994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 14:33:34.362596 3759994 out.go:298] Setting JSON to false
	I0701 14:33:34.362629 3759994 mustload.go:65] Loading cluster: ha-767646
	I0701 14:33:34.363882 3759994 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:33:34.363908 3759994 status.go:255] checking status of ha-767646 ...
	I0701 14:33:34.365675 3759994 notify.go:220] Checking for updates...
	I0701 14:33:34.366951 3759994 cli_runner.go:164] Run: docker container inspect ha-767646 --format={{.State.Status}}
	I0701 14:33:34.387431 3759994 status.go:330] ha-767646 host status = "Running" (err=<nil>)
	I0701 14:33:34.387454 3759994 host.go:66] Checking if "ha-767646" exists ...
	I0701 14:33:34.387899 3759994 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646
	I0701 14:33:34.416214 3759994 host.go:66] Checking if "ha-767646" exists ...
	I0701 14:33:34.416670 3759994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 14:33:34.416721 3759994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646
	I0701 14:33:34.434896 3759994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33915 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646/id_rsa Username:docker}
	I0701 14:33:34.534284 3759994 ssh_runner.go:195] Run: systemctl --version
	I0701 14:33:34.538546 3759994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 14:33:34.550997 3759994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:33:34.619620 3759994 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-07-01 14:33:34.609563846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:33:34.620203 3759994 kubeconfig.go:125] found "ha-767646" server: "https://192.168.49.254:8443"
	I0701 14:33:34.620227 3759994 api_server.go:166] Checking apiserver status ...
	I0701 14:33:34.620267 3759994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 14:33:34.633334 3759994 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1471/cgroup
	I0701 14:33:34.643004 3759994 api_server.go:182] apiserver freezer: "6:freezer:/docker/b07ec69b038ff2dfbfa3d1835c65a1e7dca78356fdd4dcc2b404f75a589c6fb5/crio/crio-7ecb3d6e7f763ba591091deffef0f6d80430feb8829dfe3abacb3772df49ae9d"
	I0701 14:33:34.643074 3759994 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b07ec69b038ff2dfbfa3d1835c65a1e7dca78356fdd4dcc2b404f75a589c6fb5/crio/crio-7ecb3d6e7f763ba591091deffef0f6d80430feb8829dfe3abacb3772df49ae9d/freezer.state
	I0701 14:33:34.651805 3759994 api_server.go:204] freezer state: "THAWED"
	I0701 14:33:34.651833 3759994 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0701 14:33:34.660855 3759994 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0701 14:33:34.660886 3759994 status.go:422] ha-767646 apiserver status = Running (err=<nil>)
	I0701 14:33:34.660897 3759994 status.go:257] ha-767646 status: &{Name:ha-767646 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 14:33:34.660923 3759994 status.go:255] checking status of ha-767646-m02 ...
	I0701 14:33:34.661283 3759994 cli_runner.go:164] Run: docker container inspect ha-767646-m02 --format={{.State.Status}}
	I0701 14:33:34.678000 3759994 status.go:330] ha-767646-m02 host status = "Stopped" (err=<nil>)
	I0701 14:33:34.678022 3759994 status.go:343] host is not running, skipping remaining checks
	I0701 14:33:34.678030 3759994 status.go:257] ha-767646-m02 status: &{Name:ha-767646-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 14:33:34.678051 3759994 status.go:255] checking status of ha-767646-m03 ...
	I0701 14:33:34.678394 3759994 cli_runner.go:164] Run: docker container inspect ha-767646-m03 --format={{.State.Status}}
	I0701 14:33:34.695103 3759994 status.go:330] ha-767646-m03 host status = "Running" (err=<nil>)
	I0701 14:33:34.695128 3759994 host.go:66] Checking if "ha-767646-m03" exists ...
	I0701 14:33:34.695635 3759994 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646-m03
	I0701 14:33:34.711485 3759994 host.go:66] Checking if "ha-767646-m03" exists ...
	I0701 14:33:34.711822 3759994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 14:33:34.711867 3759994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m03
	I0701 14:33:34.729660 3759994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33925 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m03/id_rsa Username:docker}
	I0701 14:33:34.826781 3759994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 14:33:34.839394 3759994 kubeconfig.go:125] found "ha-767646" server: "https://192.168.49.254:8443"
	I0701 14:33:34.839419 3759994 api_server.go:166] Checking apiserver status ...
	I0701 14:33:34.839460 3759994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 14:33:34.853230 3759994 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	I0701 14:33:34.863214 3759994 api_server.go:182] apiserver freezer: "6:freezer:/docker/1dd8f5025fdce18382469f80f71eace40a5ce366cbcb1f5b4bea21982b09ca32/crio/crio-f665d8e8dc8311167519597985b4efbc3761ba0b81cc7972392ed2ccc141eeb5"
	I0701 14:33:34.863336 3759994 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1dd8f5025fdce18382469f80f71eace40a5ce366cbcb1f5b4bea21982b09ca32/crio/crio-f665d8e8dc8311167519597985b4efbc3761ba0b81cc7972392ed2ccc141eeb5/freezer.state
	I0701 14:33:34.872317 3759994 api_server.go:204] freezer state: "THAWED"
	I0701 14:33:34.872347 3759994 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0701 14:33:34.880126 3759994 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0701 14:33:34.880180 3759994 status.go:422] ha-767646-m03 apiserver status = Running (err=<nil>)
	I0701 14:33:34.880190 3759994 status.go:257] ha-767646-m03 status: &{Name:ha-767646-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 14:33:34.880212 3759994 status.go:255] checking status of ha-767646-m04 ...
	I0701 14:33:34.880513 3759994 cli_runner.go:164] Run: docker container inspect ha-767646-m04 --format={{.State.Status}}
	I0701 14:33:34.906250 3759994 status.go:330] ha-767646-m04 host status = "Running" (err=<nil>)
	I0701 14:33:34.906277 3759994 host.go:66] Checking if "ha-767646-m04" exists ...
	I0701 14:33:34.906587 3759994 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-767646-m04
	I0701 14:33:34.928269 3759994 host.go:66] Checking if "ha-767646-m04" exists ...
	I0701 14:33:34.928592 3759994 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 14:33:34.928637 3759994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-767646-m04
	I0701 14:33:34.945764 3759994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33930 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/ha-767646-m04/id_rsa Username:docker}
	I0701 14:33:35.042447 3759994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 14:33:35.055012 3759994 status.go:257] ha-767646-m04 status: &{Name:ha-767646-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
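Note that status deliberately exits non-zero (7 in this run) when any node is stopped, so the harness treats the exit code as data rather than as a failure. A sketch of branching on it in a script:

    # Sketch: status exits non-zero when any node is down, so capture the code.
    out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr
    rc=$?
    if [ "$rc" -ne 0 ]; then
      echo "cluster degraded; status exit code: $rc"   # 7 in the run above
    fi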

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

TestMultiControlPlane/serial/RestartSecondaryNode (34.23s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 node start m02 -v=7 --alsologtostderr
E0701 14:33:39.829658 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:34:00.310766 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:34:02.765597 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-767646 node start m02 -v=7 --alsologtostderr: (32.858711133s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr: (1.231935803s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.23s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.49s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.486119156s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.49s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (208.83s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-767646 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-767646 -v=7 --alsologtostderr
E0701 14:34:30.455972 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:34:41.271644 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-767646 -v=7 --alsologtostderr: (36.92300942s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-767646 --wait=true -v=7 --alsologtostderr
E0701 14:36:03.192431 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-767646 --wait=true -v=7 --alsologtostderr: (2m51.770842477s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-767646
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (208.83s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.24s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-767646 node delete m03 -v=7 --alsologtostderr: (11.196624291s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.24s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.52s)

TestMultiControlPlane/serial/StopCluster (35.73s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 stop -v=7 --alsologtostderr
E0701 14:38:19.349633 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-767646 stop -v=7 --alsologtostderr: (35.617294266s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr: exit status 7 (110.737417ms)

-- stdout --
	ha-767646
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767646-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-767646-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0701 14:38:31.585590 3774509 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:38:31.585800 3774509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:38:31.585826 3774509 out.go:304] Setting ErrFile to fd 2...
	I0701 14:38:31.585844 3774509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:38:31.586135 3774509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 14:38:31.586368 3774509 out.go:298] Setting JSON to false
	I0701 14:38:31.586425 3774509 mustload.go:65] Loading cluster: ha-767646
	I0701 14:38:31.586450 3774509 notify.go:220] Checking for updates...
	I0701 14:38:31.586887 3774509 config.go:182] Loaded profile config "ha-767646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:38:31.586908 3774509 status.go:255] checking status of ha-767646 ...
	I0701 14:38:31.587734 3774509 cli_runner.go:164] Run: docker container inspect ha-767646 --format={{.State.Status}}
	I0701 14:38:31.605123 3774509 status.go:330] ha-767646 host status = "Stopped" (err=<nil>)
	I0701 14:38:31.605147 3774509 status.go:343] host is not running, skipping remaining checks
	I0701 14:38:31.605154 3774509 status.go:257] ha-767646 status: &{Name:ha-767646 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 14:38:31.605191 3774509 status.go:255] checking status of ha-767646-m02 ...
	I0701 14:38:31.605580 3774509 cli_runner.go:164] Run: docker container inspect ha-767646-m02 --format={{.State.Status}}
	I0701 14:38:31.629814 3774509 status.go:330] ha-767646-m02 host status = "Stopped" (err=<nil>)
	I0701 14:38:31.629838 3774509 status.go:343] host is not running, skipping remaining checks
	I0701 14:38:31.629846 3774509 status.go:257] ha-767646-m02 status: &{Name:ha-767646-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 14:38:31.629865 3774509 status.go:255] checking status of ha-767646-m04 ...
	I0701 14:38:31.630167 3774509 cli_runner.go:164] Run: docker container inspect ha-767646-m04 --format={{.State.Status}}
	I0701 14:38:31.648698 3774509 status.go:330] ha-767646-m04 host status = "Stopped" (err=<nil>)
	I0701 14:38:31.648721 3774509 status.go:343] host is not running, skipping remaining checks
	I0701 14:38:31.648741 3774509 status.go:257] ha-767646-m04 status: &{Name:ha-767646-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.73s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.57s)

TestMultiControlPlane/serial/AddSecondaryNode (64.1s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-767646 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-767646 --control-plane -v=7 --alsologtostderr: (1m3.097456073s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-767646 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (64.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

TestJSONOutput/start/Command (76.25s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-214847 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-214847 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m16.248183934s)
--- PASS: TestJSONOutput/start/Command (76.25s)
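The --output=json mode emits one CloudEvents-style JSON object per line, which is what the parallel subtests below assert over (distinct, increasing currentstep values). A sketch of consuming that stream with jq; the event type string matches what minikube emits, but treat the data field names as assumptions to verify against your version:

    minikube start -p json-demo --output=json --driver=docker \
        --container-runtime=crio 2>&1 \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
               | "\(.data.currentstep)/\(.data.totalsteps) \(.data.name)"'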

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-214847 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-214847 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.92s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-214847 --output=json --user=testUser
E0701 14:43:19.351198 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-214847 --output=json --user=testUser: (5.920723893s)
--- PASS: TestJSONOutput/stop/Command (5.92s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-869575 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-869575 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.28302ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9d43459a-b3b7-455e-aa26-c248a87e5f75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-869575] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4b0f128-8145-4baa-b57d-d3f12bed2411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19166"}}
	{"specversion":"1.0","id":"6aaa50ce-04c7-48e6-a358-33ac4eb08ad9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1369f020-ebc1-40f7-bffa-01060f1a12c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig"}}
	{"specversion":"1.0","id":"6bb1e9b9-6aab-460c-a142-7ae5f07d7201","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube"}}
	{"specversion":"1.0","id":"162865f0-c27e-4cab-ab53-18cc8ffdbcd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"46956626-ead0-4aed-8b91-2bcc3189ff63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"448349c5-0250-4200-8655-2804b7ee1d8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-869575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-869575
--- PASS: TestErrorJSONOutput (0.21s)
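Note: each line in the stdout above is a CloudEvents-style JSON envelope (specversion/id/source/type/datacontenttype/data), as emitted by --output=json. A minimal Go sketch for decoding one such line follows; the event struct and field names here are illustrative, mirroring the keys visible above, and are not the test suite's own parser.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the envelope keys visible in the JSON lines above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Error event copied from the stdout above (abridged to the fields shown).
		line := `{"specversion":"1.0","id":"448349c5-0250-4200-8655-2804b7ee1d8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
	}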

                                                
                                    
TestKicCustomNetwork/create_custom_network (43.64s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-460389 --network=
E0701 14:44:02.765344 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-460389 --network=: (41.537517943s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-460389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-460389
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-460389: (2.076381975s)
--- PASS: TestKicCustomNetwork/create_custom_network (43.64s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.72s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-169909 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-169909 --network=bridge: (36.787960389s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-169909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-169909
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-169909: (1.915131741s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.72s)

                                                
                                    
TestKicExistingNetwork (33.69s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-182988 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-182988 --network=existing-network: (31.528857172s)
helpers_test.go:175: Cleaning up "existing-network-182988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-182988
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-182988: (2.011784734s)
--- PASS: TestKicExistingNetwork (33.69s)

                                                
                                    
TestKicCustomSubnet (34.83s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-123559 --subnet=192.168.60.0/24
E0701 14:45:25.816119 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-123559 --subnet=192.168.60.0/24: (32.685614257s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-123559 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-123559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-123559
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-123559: (2.123433496s)
--- PASS: TestKicCustomSubnet (34.83s)

                                                
                                    
TestKicStaticIP (36.11s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-230688 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-230688 --static-ip=192.168.200.200: (33.845710421s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-230688 ip
helpers_test.go:175: Cleaning up "static-ip-230688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-230688
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-230688: (2.112156301s)
--- PASS: TestKicStaticIP (36.11s)
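For context, the assertion in this test amounts to comparing the --static-ip flag value with what `minikube ip` reports. A minimal standalone sketch under the same assumptions (binary path, profile name, and IP taken from the log above; this is not the suite's actual check):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask the profile for its node IP, exactly as the test run does above.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-230688", "ip").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		if got != "192.168.200.200" {
			fmt.Printf("unexpected IP: got %q, want 192.168.200.200\n", got)
			return
		}
		fmt.Println("static IP verified:", got)
	}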

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (63.4s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-226510 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-226510 --driver=docker  --container-runtime=crio: (28.662319404s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-228939 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-228939 --driver=docker  --container-runtime=crio: (29.62559054s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-226510
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-228939
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-228939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-228939
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-228939: (1.94613549s)
helpers_test.go:175: Cleaning up "first-226510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-226510
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-226510: (1.929442467s)
--- PASS: TestMinikubeProfile (63.40s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.91s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-369482 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-369482 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.907320908s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.91s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-369482 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-382421 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-382421 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.488174328s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.49s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-382421 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-369482 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-369482 --alsologtostderr -v=5: (1.633050806s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-382421 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-382421
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-382421: (1.196982886s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.57s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-382421
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-382421: (7.57095121s)
--- PASS: TestMountStart/serial/RestartStopped (8.57s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-382421 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (125.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-907469 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0701 14:48:19.348432 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:49:02.765202 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 14:49:42.393823 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-907469 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m5.019674772s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.53s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-907469 -- rollout status deployment/busybox: (3.949953852s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- exec busybox-fc5497c4f-9gkqz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- exec busybox-fc5497c4f-bdx5m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- exec busybox-fc5497c4f-9gkqz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- exec busybox-fc5497c4f-bdx5m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- exec busybox-fc5497c4f-9gkqz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- exec busybox-fc5497c4f-bdx5m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.99s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- exec busybox-fc5497c4f-9gkqz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- exec busybox-fc5497c4f-9gkqz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- exec busybox-fc5497c4f-bdx5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-907469 -- exec busybox-fc5497c4f-bdx5m -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                    
TestMultiNode/serial/AddNode (46.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-907469 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-907469 -v 3 --alsologtostderr: (46.019458693s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.67s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-907469 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp testdata/cp-test.txt multinode-907469:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp multinode-907469:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2365738575/001/cp-test_multinode-907469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp multinode-907469:/home/docker/cp-test.txt multinode-907469-m02:/home/docker/cp-test_multinode-907469_multinode-907469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m02 "sudo cat /home/docker/cp-test_multinode-907469_multinode-907469-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp multinode-907469:/home/docker/cp-test.txt multinode-907469-m03:/home/docker/cp-test_multinode-907469_multinode-907469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m03 "sudo cat /home/docker/cp-test_multinode-907469_multinode-907469-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp testdata/cp-test.txt multinode-907469-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp multinode-907469-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2365738575/001/cp-test_multinode-907469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp multinode-907469-m02:/home/docker/cp-test.txt multinode-907469:/home/docker/cp-test_multinode-907469-m02_multinode-907469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469 "sudo cat /home/docker/cp-test_multinode-907469-m02_multinode-907469.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp multinode-907469-m02:/home/docker/cp-test.txt multinode-907469-m03:/home/docker/cp-test_multinode-907469-m02_multinode-907469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m03 "sudo cat /home/docker/cp-test_multinode-907469-m02_multinode-907469-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp testdata/cp-test.txt multinode-907469-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp multinode-907469-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2365738575/001/cp-test_multinode-907469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp multinode-907469-m03:/home/docker/cp-test.txt multinode-907469:/home/docker/cp-test_multinode-907469-m03_multinode-907469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469 "sudo cat /home/docker/cp-test_multinode-907469-m03_multinode-907469.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 cp multinode-907469-m03:/home/docker/cp-test.txt multinode-907469-m02:/home/docker/cp-test_multinode-907469-m03_multinode-907469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 ssh -n multinode-907469-m02 "sudo cat /home/docker/cp-test_multinode-907469-m03_multinode-907469-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.96s)

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-907469 node stop m03: (1.210728223s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-907469 status: exit status 7 (506.194768ms)

                                                
                                                
-- stdout --
	multinode-907469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-907469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-907469-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-907469 status --alsologtostderr: exit status 7 (510.280241ms)

                                                
                                                
-- stdout --
	multinode-907469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-907469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-907469-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 14:51:15.520359 3829475 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:51:15.521144 3829475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:51:15.521181 3829475 out.go:304] Setting ErrFile to fd 2...
	I0701 14:51:15.521206 3829475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:51:15.521566 3829475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 14:51:15.521811 3829475 out.go:298] Setting JSON to false
	I0701 14:51:15.521881 3829475 mustload.go:65] Loading cluster: multinode-907469
	I0701 14:51:15.522625 3829475 config.go:182] Loaded profile config "multinode-907469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:51:15.523155 3829475 status.go:255] checking status of multinode-907469 ...
	I0701 14:51:15.524192 3829475 cli_runner.go:164] Run: docker container inspect multinode-907469 --format={{.State.Status}}
	I0701 14:51:15.523116 3829475 notify.go:220] Checking for updates...
	I0701 14:51:15.548907 3829475 status.go:330] multinode-907469 host status = "Running" (err=<nil>)
	I0701 14:51:15.548938 3829475 host.go:66] Checking if "multinode-907469" exists ...
	I0701 14:51:15.549301 3829475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-907469
	I0701 14:51:15.573925 3829475 host.go:66] Checking if "multinode-907469" exists ...
	I0701 14:51:15.574244 3829475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 14:51:15.574295 3829475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-907469
	I0701 14:51:15.597267 3829475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34035 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/multinode-907469/id_rsa Username:docker}
	I0701 14:51:15.690290 3829475 ssh_runner.go:195] Run: systemctl --version
	I0701 14:51:15.694743 3829475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 14:51:15.705931 3829475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 14:51:15.763632 3829475 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-07-01 14:51:15.753749698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 14:51:15.764236 3829475 kubeconfig.go:125] found "multinode-907469" server: "https://192.168.67.2:8443"
	I0701 14:51:15.764264 3829475 api_server.go:166] Checking apiserver status ...
	I0701 14:51:15.764306 3829475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0701 14:51:15.775017 3829475 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1432/cgroup
	I0701 14:51:15.784428 3829475 api_server.go:182] apiserver freezer: "6:freezer:/docker/4ec0574ae62ac7e1029fb05356ae25c6f6800163985a36e5ba2275d846e9644d/crio/crio-66c59c066daf515b9627714293d7a26b281c76d109a365a3d06be6d0f4c1f823"
	I0701 14:51:15.784499 3829475 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4ec0574ae62ac7e1029fb05356ae25c6f6800163985a36e5ba2275d846e9644d/crio/crio-66c59c066daf515b9627714293d7a26b281c76d109a365a3d06be6d0f4c1f823/freezer.state
	I0701 14:51:15.792973 3829475 api_server.go:204] freezer state: "THAWED"
	I0701 14:51:15.793005 3829475 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0701 14:51:15.800515 3829475 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0701 14:51:15.800543 3829475 status.go:422] multinode-907469 apiserver status = Running (err=<nil>)
	I0701 14:51:15.800561 3829475 status.go:257] multinode-907469 status: &{Name:multinode-907469 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 14:51:15.800611 3829475 status.go:255] checking status of multinode-907469-m02 ...
	I0701 14:51:15.800931 3829475 cli_runner.go:164] Run: docker container inspect multinode-907469-m02 --format={{.State.Status}}
	I0701 14:51:15.817318 3829475 status.go:330] multinode-907469-m02 host status = "Running" (err=<nil>)
	I0701 14:51:15.817342 3829475 host.go:66] Checking if "multinode-907469-m02" exists ...
	I0701 14:51:15.817662 3829475 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-907469-m02
	I0701 14:51:15.833449 3829475 host.go:66] Checking if "multinode-907469-m02" exists ...
	I0701 14:51:15.833769 3829475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0701 14:51:15.833815 3829475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-907469-m02
	I0701 14:51:15.849864 3829475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34040 SSHKeyPath:/home/jenkins/minikube-integration/19166-3708336/.minikube/machines/multinode-907469-m02/id_rsa Username:docker}
	I0701 14:51:15.946310 3829475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0701 14:51:15.957949 3829475 status.go:257] multinode-907469-m02 status: &{Name:multinode-907469-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0701 14:51:15.957987 3829475 status.go:255] checking status of multinode-907469-m03 ...
	I0701 14:51:15.958333 3829475 cli_runner.go:164] Run: docker container inspect multinode-907469-m03 --format={{.State.Status}}
	I0701 14:51:15.974677 3829475 status.go:330] multinode-907469-m03 host status = "Stopped" (err=<nil>)
	I0701 14:51:15.974703 3829475 status.go:343] host is not running, skipping remaining checks
	I0701 14:51:15.974717 3829475 status.go:257] multinode-907469-m03 status: &{Name:multinode-907469-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-907469 node start m03 -v=7 --alsologtostderr: (9.07362998s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.84s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-907469
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-907469
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-907469: (24.762404072s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-907469 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-907469 --wait=true -v=8 --alsologtostderr: (57.038179421s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-907469
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.92s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-907469 node delete m03: (4.619300062s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-907469 stop: (23.642566626s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-907469 status: exit status 7 (88.685039ms)

                                                
                                                
-- stdout --
	multinode-907469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-907469-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-907469 status --alsologtostderr: exit status 7 (90.358138ms)

                                                
                                                
-- stdout --
	multinode-907469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-907469-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 14:53:16.821275 3836903 out.go:291] Setting OutFile to fd 1 ...
	I0701 14:53:16.821451 3836903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:53:16.821481 3836903 out.go:304] Setting ErrFile to fd 2...
	I0701 14:53:16.821500 3836903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 14:53:16.821744 3836903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 14:53:16.821955 3836903 out.go:298] Setting JSON to false
	I0701 14:53:16.822012 3836903 mustload.go:65] Loading cluster: multinode-907469
	I0701 14:53:16.822043 3836903 notify.go:220] Checking for updates...
	I0701 14:53:16.822468 3836903 config.go:182] Loaded profile config "multinode-907469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 14:53:16.822499 3836903 status.go:255] checking status of multinode-907469 ...
	I0701 14:53:16.823044 3836903 cli_runner.go:164] Run: docker container inspect multinode-907469 --format={{.State.Status}}
	I0701 14:53:16.841381 3836903 status.go:330] multinode-907469 host status = "Stopped" (err=<nil>)
	I0701 14:53:16.841403 3836903 status.go:343] host is not running, skipping remaining checks
	I0701 14:53:16.841410 3836903 status.go:257] multinode-907469 status: &{Name:multinode-907469 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0701 14:53:16.841433 3836903 status.go:255] checking status of multinode-907469-m02 ...
	I0701 14:53:16.841757 3836903 cli_runner.go:164] Run: docker container inspect multinode-907469-m02 --format={{.State.Status}}
	I0701 14:53:16.866597 3836903 status.go:330] multinode-907469-m02 host status = "Stopped" (err=<nil>)
	I0701 14:53:16.866621 3836903 status.go:343] host is not running, skipping remaining checks
	I0701 14:53:16.866629 3836903 status.go:257] multinode-907469-m02 status: &{Name:multinode-907469-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-907469 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0701 14:53:19.348374 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 14:54:02.765555 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-907469 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (54.76311043s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-907469 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.44s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-907469
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-907469-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-907469-m02 --driver=docker  --container-runtime=crio: exit status 14 (78.804858ms)

                                                
                                                
-- stdout --
	* [multinode-907469-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-907469-m02' is duplicated with machine name 'multinode-907469-m02' in profile 'multinode-907469'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-907469-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-907469-m03 --driver=docker  --container-runtime=crio: (32.812407111s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-907469
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-907469: exit status 80 (366.641092ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-907469 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-907469-m03 already exists in multinode-907469-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_8.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-907469-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-907469-m03: (1.963304084s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.28s)

                                                
                                    
TestPreload (113.81s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-745396 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-745396 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m22.238930945s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-745396 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-745396 image pull gcr.io/k8s-minikube/busybox: (1.853076299s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-745396
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-745396: (5.753983569s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-745396 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-745396 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.125066262s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-745396 image list
helpers_test.go:175: Cleaning up "test-preload-745396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-745396
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-745396: (2.517050367s)
--- PASS: TestPreload (113.81s)

                                                
                                    
TestScheduledStopUnix (103.05s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-570641 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-570641 --memory=2048 --driver=docker  --container-runtime=crio: (27.715279227s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-570641 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-570641 -n scheduled-stop-570641
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-570641 -n scheduled-stop-570641: exit status 85 (70.09798ms)

                                                
                                                
-- stdout --
	* Profile "scheduled-stop-570641" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p scheduled-stop-570641"

                                                
                                                
-- /stdout --
scheduled_stop_test.go:191: status error: exit status 85 (may be ok)
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-570641 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-570641 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-570641 -n scheduled-stop-570641
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-570641
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-570641 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0701 14:58:19.348477 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-570641
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-570641: exit status 7 (66.265473ms)

                                                
                                                
-- stdout --
	scheduled-stop-570641
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-570641 -n scheduled-stop-570641
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-570641 -n scheduled-stop-570641: exit status 7 (65.431055ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-570641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-570641
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-570641: (4.10574947s)
--- PASS: TestScheduledStopUnix (103.05s)
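
Note: the scheduled-stop flags exercised above compose as follows; a minimal sketch with a hypothetical profile name (every flag appears verbatim in the test invocations):

	$ minikube stop -p sched-demo --schedule 5m                  # arm a stop five minutes out
	$ minikube status -p sched-demo --format={{.TimeToStop}}     # remaining time until the stop fires
	$ minikube stop -p sched-demo --cancel-scheduled             # disarm the pending stop
	$ minikube stop -p sched-demo --schedule 15s                 # re-arm with a short timer
	$ minikube status -p sched-demo --format={{.Host}}           # reports "Stopped" (exit 7) once it fires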

                                                
                                    
TestInsufficientStorage (10.48s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-541742 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-541742 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.987857198s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"093415f6-225c-40fb-8cd4-59cc3a3363d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-541742] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9cd9b20b-20eb-4a59-972c-b98f68a1c206","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19166"}}
	{"specversion":"1.0","id":"e725286a-47e8-4735-a143-bf5b19f0d2c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d382e937-413c-4b61-b56e-04579fa62962","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig"}}
	{"specversion":"1.0","id":"571f83bf-2566-4df1-a50f-be6214410350","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube"}}
	{"specversion":"1.0","id":"2345c774-49b3-4d91-a6fb-bf11f524b94a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2c91c0dc-5a7b-4043-8da4-75cc3b6b2407","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"496978e9-c692-4197-8175-00f37f4031ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1ba46d03-340b-4531-a405-5835b5d19bdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"50c2656a-6647-4601-b510-385e26948392","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9ce92b5-6fb9-45fe-9f23-db357a3184ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"452d0abf-059d-4c54-bc92-c3a100392790","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-541742\" primary control-plane node in \"insufficient-storage-541742\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b29448d-2480-4190-b081-0470d8a03faa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1719413016-19142 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"566dcd2e-b8e1-49b8-ae23-f3aa42d5e24b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4d22500-797d-4fdd-a8ce-f88aa7bcf6b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-541742 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-541742 --output=json --layout=cluster: exit status 7 (288.026828ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-541742","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-541742","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0701 14:58:36.730229 3854247 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-541742" does not appear in /home/jenkins/minikube-integration/19166-3708336/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-541742 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-541742 --output=json --layout=cluster: exit status 7 (297.268402ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-541742","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-541742","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0701 14:58:37.027988 3854305 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-541742" does not appear in /home/jenkins/minikube-integration/19166-3708336/kubeconfig
	E0701 14:58:37.040188 3854305 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/insufficient-storage-541742/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-541742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-541742
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-541742: (1.909635195s)
--- PASS: TestInsufficientStorage (10.48s)
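
Note: when start exits 26 with RSRC_DOCKER_STORAGE as above, the advice embedded in the error amounts to these commands; a sketch, with --force reserved for deliberately skipping the free-space check:

	$ docker system prune -a                      # reclaim unused Docker data on the host
	$ minikube ssh -- docker system prune         # same, inside the node (Docker container runtime only)
	$ minikube start -p <profile> --force         # bypass the storage check entirely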

                                                
                                    
TestRunningBinaryUpgrade (81.31s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1076254848 start -p running-upgrade-947594 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0701 15:03:19.347626 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1076254848 start -p running-upgrade-947594 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.966668664s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-947594 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0701 15:04:02.765936 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-947594 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.196642228s)
helpers_test.go:175: Cleaning up "running-upgrade-947594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-947594
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-947594: (2.944567473s)
--- PASS: TestRunningBinaryUpgrade (81.31s)
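
Note: TestRunningBinaryUpgrade covers an in-place binary upgrade: an older release brings the cluster up, then the binary under test runs start against the same still-running profile. A sketch with hypothetical binary paths:

	$ /tmp/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=crio
	$ ./minikube-new start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=crio    # upgrades the running cluster in place
	$ ./minikube-new delete -p upgrade-demo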

                                                
                                    
TestKubernetesUpgrade (137.41s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-909384 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-909384 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m11.399549765s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-909384
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-909384: (1.274894798s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-909384 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-909384 status --format={{.Host}}: exit status 7 (105.86797ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-909384 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0701 15:02:05.816263 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-909384 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.816654608s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-909384 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-909384 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-909384 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (151.835508ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-909384] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-909384
	    minikube start -p kubernetes-upgrade-909384 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9093842 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-909384 --kubernetes-version=v1.30.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-909384 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-909384 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.613473233s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-909384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-909384
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-909384: (2.92217776s)
--- PASS: TestKubernetesUpgrade (137.41s)
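
Note: the contract verified above: stop-then-start with a newer --kubernetes-version upgrades the cluster, while requesting an older version is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and the cluster must be recreated instead. Sketch with a hypothetical profile name:

	$ minikube start -p k8s-demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
	$ minikube stop -p k8s-demo
	$ minikube start -p k8s-demo --kubernetes-version=v1.30.2    # upgrade succeeds
	$ minikube start -p k8s-demo --kubernetes-version=v1.20.0    # downgrade refused, exit status 106
	$ minikube delete -p k8s-demo                                # supported downgrade path: recreate
	$ minikube start -p k8s-demo --kubernetes-version=v1.20.0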

                                                
                                    
TestMissingContainerUpgrade (151.84s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3111618363 start -p missing-upgrade-046700 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3111618363 start -p missing-upgrade-046700 --memory=2200 --driver=docker  --container-runtime=crio: (1m11.909996214s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-046700
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-046700: (10.407034182s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-046700
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-046700 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-046700 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m5.912162825s)
helpers_test.go:175: Cleaning up "missing-upgrade-046700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-046700
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-046700: (2.363068152s)
--- PASS: TestMissingContainerUpgrade (151.84s)
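
Note: TestMissingContainerUpgrade simulates the node container being removed behind minikube's back; the next start from the new binary must notice and recreate it. Sketch with a hypothetical profile name:

	$ docker stop missing-demo && docker rm missing-demo
	$ minikube start -p missing-demo --driver=docker --container-runtime=crio    # rebuilds the missing node container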

                                                
                                    
TestPause/serial/Start (89.88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-592733 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-592733 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m29.88058895s)
--- PASS: TestPause/serial/Start (89.88s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-973454 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-973454 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (91.565215ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-973454] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
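
Note: as the MK_USAGE error above states, --no-kubernetes and --kubernetes-version are mutually exclusive. If a version is pinned in the global config, clear it before starting without Kubernetes; a sketch with a hypothetical profile name:

	$ minikube config unset kubernetes-version
	$ minikube start -p nokube-demo --no-kubernetes --driver=docker --container-runtime=crio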

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-973454 --driver=docker  --container-runtime=crio
E0701 14:59:02.765561 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-973454 --driver=docker  --container-runtime=crio: (43.219673297s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-973454 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.60s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-973454 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-973454 --no-kubernetes --driver=docker  --container-runtime=crio: (4.588591546s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-973454 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-973454 status -o json: exit status 2 (309.243218ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-973454","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-973454
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-973454: (1.984596454s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.88s)

                                                
                                    
TestNoKubernetes/serial/Start (7.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-973454 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-973454 --no-kubernetes --driver=docker  --container-runtime=crio: (7.657262372s)
--- PASS: TestNoKubernetes/serial/Start (7.66s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-973454 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-973454 "sudo systemctl is-active --quiet service kubelet": exit status 1 (264.476041ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.99s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-973454
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-973454: (1.212561375s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-973454 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-973454 --driver=docker  --container-runtime=crio: (7.820786887s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.82s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-973454 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-973454 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.238072ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.84s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-592733 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-592733 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.810727531s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.84s)

                                                
                                    
TestPause/serial/Pause (1.2s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-592733 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-592733 --alsologtostderr -v=5: (1.201249232s)
--- PASS: TestPause/serial/Pause (1.20s)

                                                
                                    
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-592733 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-592733 --output=json --layout=cluster: exit status 2 (381.522048ms)

                                                
                                                
-- stdout --
	{"Name":"pause-592733","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-592733","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)

                                                
                                    
TestPause/serial/Unpause (0.94s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-592733 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.94s)

                                                
                                    
TestPause/serial/PauseAgain (1.28s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-592733 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-592733 --alsologtostderr -v=5: (1.281336157s)
--- PASS: TestPause/serial/PauseAgain (1.28s)

                                                
                                    
TestPause/serial/DeletePaused (2.97s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-592733 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-592733 --alsologtostderr -v=5: (2.969327938s)
--- PASS: TestPause/serial/DeletePaused (2.97s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.5s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-592733
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-592733: exit status 1 (29.913169ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-592733: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)
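
Note: the pause subtests above walk the full lifecycle, and a paused cluster deliberately reports through a non-zero exit code. Sketch with a hypothetical profile name:

	$ minikube pause -p pause-demo
	$ minikube status -p pause-demo --output=json --layout=cluster    # exit 2, StatusCode 418 ("Paused")
	$ minikube unpause -p pause-demo
	$ minikube delete -p pause-demo
	$ docker volume inspect pause-demo                                # exit 1 once the resources are gone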

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.13s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (77.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.932920745 start -p stopped-upgrade-154114 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.932920745 start -p stopped-upgrade-154114 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.581999181s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.932920745 -p stopped-upgrade-154114 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.932920745 -p stopped-upgrade-154114 stop: (2.782469556s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-154114 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-154114 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.196718584s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (77.56s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-154114
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-154114: (1.425561929s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)
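
Note: TestStoppedBinaryUpgrade is the stopped-cluster variant of the binary upgrade: the old release starts and stops the cluster, and the binary under test restarts it. Sketch with hypothetical binary paths:

	$ /tmp/minikube-v1.26.0 start -p stopped-demo --memory=2200 --vm-driver=docker --container-runtime=crio
	$ /tmp/minikube-v1.26.0 -p stopped-demo stop
	$ ./minikube-new start -p stopped-demo --memory=2200 --driver=docker --container-runtime=crio
	$ ./minikube-new logs -p stopped-demo    # logs stay readable after the upgrade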

                                                
                                    
TestNetworkPlugins/group/false (5.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-637965 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-637965 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (314.143557ms)

                                                
                                                
-- stdout --
	* [false-637965] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19166
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0701 15:04:29.202714 3889550 out.go:291] Setting OutFile to fd 1 ...
	I0701 15:04:29.202989 3889550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 15:04:29.203004 3889550 out.go:304] Setting ErrFile to fd 2...
	I0701 15:04:29.203011 3889550 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0701 15:04:29.203432 3889550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19166-3708336/.minikube/bin
	I0701 15:04:29.206844 3889550 out.go:298] Setting JSON to false
	I0701 15:04:29.207951 3889550 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":168421,"bootTime":1719677849,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1063-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0701 15:04:29.208095 3889550 start.go:139] virtualization:  
	I0701 15:04:29.213085 3889550 out.go:177] * [false-637965] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0701 15:04:29.215805 3889550 out.go:177]   - MINIKUBE_LOCATION=19166
	I0701 15:04:29.215975 3889550 notify.go:220] Checking for updates...
	I0701 15:04:29.221962 3889550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0701 15:04:29.225089 3889550 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19166-3708336/kubeconfig
	I0701 15:04:29.228338 3889550 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19166-3708336/.minikube
	I0701 15:04:29.231371 3889550 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0701 15:04:29.238214 3889550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0701 15:04:29.245617 3889550 config.go:182] Loaded profile config "force-systemd-env-737034": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0701 15:04:29.245776 3889550 driver.go:392] Setting default libvirt URI to qemu:///system
	I0701 15:04:29.289560 3889550 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0701 15:04:29.289704 3889550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0701 15:04:29.409787 3889550 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-07-01 15:04:29.389820578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1063-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
	I0701 15:04:29.409898 3889550 docker.go:295] overlay module found
	I0701 15:04:29.418617 3889550 out.go:177] * Using the docker driver based on user configuration
	I0701 15:04:29.421592 3889550 start.go:297] selected driver: docker
	I0701 15:04:29.421907 3889550 start.go:901] validating driver "docker" against <nil>
	I0701 15:04:29.421924 3889550 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0701 15:04:29.427923 3889550 out.go:177] 
	W0701 15:04:29.430923 3889550 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0701 15:04:29.433558 3889550 out.go:177] 

                                                
                                                
** /stderr **
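
Note: the exit 14 above is the expected outcome: the crio runtime requires a CNI, so --cni=false is rejected with MK_USAGE before any node is created. A sketch of a valid invocation, assuming bridge as one of minikube's built-in --cni choices:

	$ minikube start -p cni-demo --cni=bridge --driver=docker --container-runtime=crio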
net_test.go:88: 
----------------------- debugLogs start: false-637965 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-637965

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-637965" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"


>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-637965

>>> host: docker daemon status:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: docker daemon config:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: /etc/docker/daemon.json:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: docker system info:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: cri-docker daemon status:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: cri-docker daemon config:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: cri-dockerd version:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: containerd daemon status:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: containerd daemon config:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: /etc/containerd/config.toml:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: containerd config dump:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: crio daemon status:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: crio daemon config:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: /etc/crio:
* Profile "false-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

>>> host: crio config:
* Profile "false-637965" not found. Run "minikube start -p false-637965" to view all profiles.
To start a cluster, run: "minikube start -p false-637965"

----------------------- debugLogs end: false-637965 [took: 4.633251084s] --------------------------------
helpers_test.go:175: Cleaning up "false-637965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-637965
--- PASS: TestNetworkPlugins/group/false (5.16s)
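
Note: every ">>> host:" probe above prints the same hint because debugLogs ran against a profile that was never started; each probe is effectively a "minikube ssh" read of a host file. To run one of these probes by hand against a live profile (a sketch; the profile name is a placeholder):

    minikube profile list
    minikube -p <profile> ssh "sudo cat /var/lib/kubelet/config.yaml"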

TestStartStop/group/old-k8s-version/serial/FirstStart (169.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-474598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0701 15:06:22.394709 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 15:08:19.348002 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-474598 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m49.210963877s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (169.21s)
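
For reference, the FirstStart invocation above can be reproduced outside the harness with the same flags (a sketch, assuming the out/ binary and a local docker daemon):

    out/minikube-linux-arm64 start -p old-k8s-version-474598 --memory=2200 \
      --wait=true --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.20.0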

TestStartStop/group/old-k8s-version/serial/DeployApp (9.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-474598 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [91e015d1-1afc-4016-8924-d4032065550c] Pending
helpers_test.go:344: "busybox" [91e015d1-1afc-4016-8924-d4032065550c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [91e015d1-1afc-4016-8924-d4032065550c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004013601s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-474598 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.91s)
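
The DeployApp helper polls pods by label until they are Running; roughly the same check can be done by hand, using kubectl wait as a stand-in for the test's own poller:

    kubectl --context old-k8s-version-474598 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-474598 wait pod -l integration-test=busybox \
      --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-474598 exec busybox -- /bin/sh -c "ulimit -n"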

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-474598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-474598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.24580462s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-474598 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)
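
The --images/--registries pair redirects the addon to a stand-in image. Whether the override landed can be checked directly; the jsonpath query and the expected value below are inferred from the flags above, not taken from the log:

    kubectl --context old-k8s-version-474598 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected (per the flags): fake.domain/registry.k8s.io/echoserver:1.4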

TestStartStop/group/old-k8s-version/serial/Stop (13.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-474598 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-474598 --alsologtostderr -v=3: (13.068625558s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.07s)

TestStartStop/group/no-preload/serial/FirstStart (71.69s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-969646 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-969646 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (1m11.687737705s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.69s)
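
With --preload=false minikube skips the preloaded image tarball and pulls everything at start time. The equivalent manual run (sketch, same assumptions as above):

    out/minikube-linux-arm64 start -p no-preload-969646 --memory=2200 \
      --preload=false --wait=true --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.30.2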

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-474598 -n old-k8s-version-474598
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-474598 -n old-k8s-version-474598: exit status 7 (96.494006ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-474598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)
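
"minikube status" renders a Go template over its status struct and encodes state in the exit code, which is why the test tolerates exit status 7 right after a stop; "Stopped" is the expected answer here. By hand (sketch):

    out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-474598 \
      || echo "status exited $? (non-zero while the host is stopped)"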

TestStartStop/group/no-preload/serial/DeployApp (8.44s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-969646 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [29cc108e-e0f5-4d45-a72d-ff42b0137f3f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [29cc108e-e0f5-4d45-a72d-ff42b0137f3f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.008611204s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-969646 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.44s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-969646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-969646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.048535957s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-969646 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-969646 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-969646 --alsologtostderr -v=3: (12.025785713s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-969646 -n no-preload-969646
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-969646 -n no-preload-969646: exit status 7 (70.009439ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-969646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (271.68s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-969646 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
E0701 15:13:19.348793 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 15:14:02.765467 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-969646 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (4m31.325761617s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-969646 -n no-preload-969646
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (271.68s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-jkgzh" [33140f6f-ec4c-488c-a19b-a4c950f5b504] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003895212s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-jkgzh" [33140f6f-ec4c-488c-a19b-a4c950f5b504] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004187846s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-969646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-969646 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
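
The image audit is driven by "image list --format=json"; the same list can be inspected manually. The jq filter and the repoTags field name are assumptions about the shape of minikube's JSON output:

    out/minikube-linux-arm64 -p no-preload-969646 image list --format=json \
      | jq -r '.[].repoTags[]'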

TestStartStop/group/no-preload/serial/Pause (3.65s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-969646 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-969646 --alsologtostderr -v=1: (1.048377792s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-969646 -n no-preload-969646
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-969646 -n no-preload-969646: exit status 2 (410.846258ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-969646 -n no-preload-969646
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-969646 -n no-preload-969646: exit status 2 (428.212531ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-969646 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-969646 -n no-preload-969646
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-969646 -n no-preload-969646
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.65s)
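
Per the status output above, pause leaves the apiserver "Paused" while the kubelet reports "Stopped", so the two probes disagree by design. The sequence the test drives, by hand (sketch):

    out/minikube-linux-arm64 pause -p no-preload-969646
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-969646   # Paused
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p no-preload-969646     # Stopped
    out/minikube-linux-arm64 unpause -p no-preload-969646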

TestStartStop/group/embed-certs/serial/FirstStart (85.28s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-207952 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-207952 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (1m25.278342329s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.28s)
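
--embed-certs inlines the client certificate and key into kubeconfig instead of referencing files under ~/.minikube. One way to confirm (the jsonpath filter is illustrative):

    kubectl config view --raw \
      -o jsonpath='{.users[?(@.name=="embed-certs-207952")].user.client-certificate-data}' \
      | head -c 20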

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wrff2" [a3a9db01-7db9-4d09-b5b6-3ebe8527f6f3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005240652s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wrff2" [a3a9db01-7db9-4d09-b5b6-3ebe8527f6f3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005898043s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-474598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-474598 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (3.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-474598 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-474598 -n old-k8s-version-474598
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-474598 -n old-k8s-version-474598: exit status 2 (388.466587ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-474598 -n old-k8s-version-474598
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-474598 -n old-k8s-version-474598: exit status 2 (422.94136ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-474598 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-474598 -n old-k8s-version-474598
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-474598 -n old-k8s-version-474598
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.59s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-069838 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-069838 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (1m21.283392801s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.28s)
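
--apiserver-port=8444 moves the apiserver off the default 8443, and minikube writes the matching server URL into kubeconfig. A quick confirmation (jsonpath is illustrative):

    kubectl config view \
      -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-069838")].cluster.server}'
    # expected to end in :8444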

TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-207952 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d7e8f5c-7c68-4339-8d08-ba78a0914e10] Pending
helpers_test.go:344: "busybox" [4d7e8f5c-7c68-4339-8d08-ba78a0914e10] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4d7e8f5c-7c68-4339-8d08-ba78a0914e10] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003866204s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-207952 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-207952 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-207952 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.006422111s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-207952 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/embed-certs/serial/Stop (12.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-207952 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-207952 --alsologtostderr -v=3: (12.023214872s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-207952 -n embed-certs-207952
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-207952 -n embed-certs-207952: exit status 7 (73.310328ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-207952 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (267.8s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-207952 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-207952 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (4m27.40596621s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-207952 -n embed-certs-207952
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.80s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-069838 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b685bb9c-2ad4-46b0-8254-1ffcd42ea800] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b685bb9c-2ad4-46b0-8254-1ffcd42ea800] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003354793s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-069838 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-069838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-069838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.531873474s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-069838 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.70s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-069838 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-069838 --alsologtostderr -v=3: (12.257491235s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-069838 -n default-k8s-diff-port-069838
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-069838 -n default-k8s-diff-port-069838: exit status 7 (72.939216ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-069838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-069838 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
E0701 15:18:19.347925 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 15:18:37.329379 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:37.334757 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:37.344990 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:37.365275 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:37.405596 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:37.485910 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:37.646162 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:37.966796 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:38.607610 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:39.888436 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:42.449570 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:45.816725 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 15:18:47.570235 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:18:57.811172 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:19:02.765475 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 15:19:18.291432 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:19:59.251653 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:20:00.684626 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:00.690507 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:00.700792 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:00.721136 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:00.761404 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:00.841694 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:01.003447 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:01.324009 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:01.964679 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:03.245336 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:05.805535 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:10.926245 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:21.166849 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:20:41.647080 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
E0701 15:21:21.172082 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:21:22.607635 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-069838 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (5m3.371654401s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-069838 -n default-k8s-diff-port-069838
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.97s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-dbc6c" [bc05c876-d09b-402f-afbc-c7515c973018] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003935091s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-dbc6c" [bc05c876-d09b-402f-afbc-c7515c973018] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003512419s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-207952 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-207952 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-207952 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-207952 -n embed-certs-207952
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-207952 -n embed-certs-207952: exit status 2 (324.22717ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-207952 -n embed-certs-207952
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-207952 -n embed-certs-207952: exit status 2 (333.970294ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-207952 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-207952 -n embed-certs-207952
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-207952 -n embed-certs-207952
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.20s)

TestStartStop/group/newest-cni/serial/FirstStart (45.53s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-977851 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-977851 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (45.533635636s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.53s)
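
Two flags do the work here: --network-plugin=cni leaves pod networking to an external CNI (hence the later warning that pods cannot schedule without extra setup), and --extra-config=component.key=value is minikube's generic passthrough, used above to hand kubeadm a pod-network-cidr. The invocation, reproduced (sketch):

    out/minikube-linux-arm64 start -p newest-cni-977851 --memory=2200 \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --feature-gates ServerSideApply=true \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.30.2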

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-snf7l" [6109279e-ec09-4076-893f-d9c6bd93c740] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005015398s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-977851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-977851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075439676s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/newest-cni/serial/Stop (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-977851 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-977851 --alsologtostderr -v=3: (1.244391453s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977851 -n newest-cni-977851
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977851 -n newest-cni-977851: exit status 7 (72.231002ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-977851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (24.29s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-977851 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-977851 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.30.2: (23.796711054s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-977851 -n newest-cni-977851
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.29s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-snf7l" [6109279e-ec09-4076-893f-d9c6bd93c740] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003984698s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-069838 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-069838 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.87s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-069838 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-069838 --alsologtostderr -v=1: (1.358881181s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-069838 -n default-k8s-diff-port-069838
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-069838 -n default-k8s-diff-port-069838: exit status 2 (492.283397ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-069838 -n default-k8s-diff-port-069838
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-069838 -n default-k8s-diff-port-069838: exit status 2 (511.319869ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-069838 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-069838 --alsologtostderr -v=1: (1.145477233s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-069838 -n default-k8s-diff-port-069838
E0701 15:22:44.527881 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-069838 -n default-k8s-diff-port-069838
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.87s)
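
Note: the Pause step asserts both directions of the transition: while paused, status reports the apiserver as Paused and the kubelet as Stopped, each with exit status 2 (flagged "may be ok"); after unpause, the same status commands succeed. The round-trip exercised above, condensed:

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-069838 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-069838   # prints Paused, exits 2
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-069838 --alsologtostderr -v=1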

TestNetworkPlugins/group/auto/Start (88.7s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m28.70271818s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.70s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-977851 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.01s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-977851 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-977851 -n newest-cni-977851
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-977851 -n newest-cni-977851: exit status 2 (313.56695ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-977851 -n newest-cni-977851
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-977851 -n newest-cni-977851: exit status 2 (322.371226ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-977851 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-977851 -n newest-cni-977851
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-977851 -n newest-cni-977851
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.01s)
E0701 15:29:02.765591 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 15:29:18.343856 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:18.349108 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:18.359360 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:18.379643 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:18.420045 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:18.500334 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:18.660800 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:18.981314 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:19.622261 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:20.902747 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:23.463893 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:25.788003 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:25.793334 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:25.803683 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:25.823981 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:25.864323 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:25.944708 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:26.105117 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:26.425736 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:27.066929 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:28.347284 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:28.584491 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:30.907752 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:36.028427 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory
E0701 15:29:38.824731 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
E0701 15:29:44.776867 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:29:46.269006 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/kindnet-637965/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (82.5s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0701 15:23:19.348283 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/functional-373457/client.crt: no such file or directory
E0701 15:23:37.329475 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
E0701 15:24:02.765789 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/addons-929335/client.crt: no such file or directory
E0701 15:24:05.012403 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/old-k8s-version-474598/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m22.498515343s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.50s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-637965 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-637965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7gn9q" [64f1276a-5d15-449e-9f66-2cdf69c630d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7gn9q" [64f1276a-5d15-449e-9f66-2cdf69c630d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003796861s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)
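
Note: every NetCatPod step below follows the same shape: force-replace the netcat deployment from testdata, then wait for the app=netcat pod to reach Running. A rough kubectl-only equivalent of the check (context name from this run; the 15m timeout mirrors the wait above):

	kubectl --context auto-637965 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-637965 wait --for=condition=Ready pod -l app=netcat --timeout=15m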

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5v5jm" [298d276e-7f5c-4a39-9215-f1cfda93f1e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004688628s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
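
Note: ControllerPod steps only confirm that the CNI's own agent pod is healthy before connectivity is probed. A hedged kubectl equivalent of the kindnet check above (label and namespace taken from the log):

	kubectl --context kindnet-637965 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m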

TestNetworkPlugins/group/auto/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-637965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
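
Note: the three short probes above all run inside the netcat pod and cover distinct paths: DNS resolves the in-cluster name kubernetes.default, Localhost connects to the pod's own port over 127.0.0.1, and HairPin connects back to the pod through its own Service name ("netcat"), which only works when the CNI supports hairpin traffic. The exact commands, repeated for each plugin below:

	kubectl --context auto-637965 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"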

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-637965 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.23s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-637965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-j65dg" [72163e57-a4d7-40eb-bb78-319ce71667bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-j65dg" [72163e57-a4d7-40eb-bb78-319ce71667bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004111183s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.23s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-637965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/calico/Start (75.41s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0701 15:25:00.684185 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m15.414027218s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.41s)

TestNetworkPlugins/group/custom-flannel/Start (71.67s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0701 15:25:28.368524 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.666546874s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.67s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7kxmg" [e207053e-5f6b-4b2c-8226-20abfb6086a2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005062071s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-637965 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-637965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4psvp" [49f547fd-5a44-485c-ab03-24ce0fe4d4b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4psvp" [49f547fd-5a44-485c-ab03-24ce0fe4d4b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00466979s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-637965 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-637965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lzdlw" [bfc78a36-257b-4a82-8e4a-2aa8d701df05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lzdlw" [bfc78a36-257b-4a82-8e4a-2aa8d701df05] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003560612s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

TestNetworkPlugins/group/calico/DNS (0.3s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-637965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.27s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-637965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (56.41s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (56.407877436s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.41s)

TestNetworkPlugins/group/flannel/Start (75.08s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0701 15:27:00.933577 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:00.939530 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:00.949721 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:00.969982 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:01.010932 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:01.091489 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:01.251986 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:01.572396 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:02.212594 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:03.492805 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:06.053496 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:11.174151 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:21.414530 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
E0701 15:27:41.895050 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m15.079119639s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.08s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-637965 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-637965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pbgkl" [d9e1efa6-731b-48a4-93a4-473f720da3c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pbgkl" [d9e1efa6-731b-48a4-93a4-473f720da3c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003450412s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-637965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-st6g2" [f9e63ffc-a1bc-40a0-b16e-23314938a1f3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005401456s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.51s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-637965 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.51s)

TestNetworkPlugins/group/bridge/Start (88.88s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-637965 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m28.884511889s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.88s)
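
Note: the Start steps in this group differ only in how the CNI is selected; memory, driver, runtime, and wait flags are held constant. Besides auto (no CNI flag), a custom manifest (--cni=testdata/kube-flannel.yaml), and the legacy default bridge (--enable-default-cni=true), the named plugins could be driven by a loop like this sketch (profile names follow the "$cni-637965" pattern used in this run):

	for cni in kindnet calico flannel bridge; do
	  out/minikube-linux-arm64 start -p "${cni}-637965" --memory=3072 --alsologtostderr \
	    --wait=true --wait-timeout=15m --cni="$cni" --driver=docker --container-runtime=crio
	done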

TestNetworkPlugins/group/flannel/NetCatPod (12.43s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-637965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fsz24" [267d169f-a2d7-47f9-96e9-2662450f3495] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0701 15:28:22.855773 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/default-k8s-diff-port-069838/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-fsz24" [267d169f-a2d7-47f9-96e9-2662450f3495] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004990042s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.43s)

TestNetworkPlugins/group/flannel/DNS (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-637965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

TestNetworkPlugins/group/flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-637965 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-637965 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bzrn7" [fad5cd2e-ced5-4f35-8692-e802d08d9cd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bzrn7" [fad5cd2e-ced5-4f35-8692-e802d08d9cd3] Running
E0701 15:29:59.305690 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/auto-637965/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.013260774s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

TestNetworkPlugins/group/bridge/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-637965 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-637965 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0701 15:30:00.684105 3713725 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/no-preload-969646/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (30/328)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-822470 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-822470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-822470
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/Volcano (0s)
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
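
Note: the three TunnelCmd DNS skips above are gated on both OS and VM driver. A sketch of that double gate, with driverName() as a hypothetical stand-in for the suite's driver accessor (this run used the docker driver on linux):

package integration

import (
	"runtime"
	"testing"
)

// driverName is a hypothetical stand-in, hard-coded to match this run.
func driverName() string { return "docker" }

func TestTunnelDNSSketch(t *testing.T) {
	if runtime.GOOS != "darwin" || driverName() != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
	// dig/dscacheutil resolution checks through the tunnel would run here.
}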

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-144516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-144516
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

TestNetworkPlugins/group/kubenet (4.17s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-637965 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-637965

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-637965

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-637965

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-637965

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-637965

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-637965

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-637965

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-637965

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-637965

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-637965

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: /etc/hosts:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: /etc/resolv.conf:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-637965

>>> host: crictl pods:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: crictl containers:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> k8s: describe netcat deployment:
error: context "kubenet-637965" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-637965" does not exist

>>> k8s: netcat logs:
error: context "kubenet-637965" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-637965" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-637965" does not exist

>>> k8s: coredns logs:
error: context "kubenet-637965" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-637965" does not exist

>>> k8s: api server logs:
error: context "kubenet-637965" does not exist

>>> host: /etc/cni:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: ip a s:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: ip r s:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: iptables-save:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: iptables table nat:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-637965" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-637965" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-637965" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: kubelet daemon config:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> k8s: kubelet logs:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19166-3708336/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Jul 2024 15:04:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-998986
contexts:
- context:
    cluster: force-systemd-flag-998986
    extensions:
    - extension:
        last-update: Mon, 01 Jul 2024 15:04:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: force-systemd-flag-998986
  name: force-systemd-flag-998986
current-context: force-systemd-flag-998986
kind: Config
preferences: {}
users:
- name: force-systemd-flag-998986
  user:
    client-certificate: /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/force-systemd-flag-998986/client.crt
    client-key: /home/jenkins/minikube-integration/19166-3708336/.minikube/profiles/force-systemd-flag-998986/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-637965

>>> host: docker daemon status:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: docker daemon config:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: docker system info:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: cri-docker daemon status:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: cri-docker daemon config:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: cri-dockerd version:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: containerd daemon status:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: containerd daemon config:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: containerd config dump:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: crio daemon status:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: crio daemon config:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: /etc/crio:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

>>> host: crio config:
* Profile "kubenet-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-637965"

----------------------- debugLogs end: kubenet-637965 [took: 3.961268136s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-637965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-637965
--- SKIP: TestNetworkPlugins/group/kubenet (4.17s)
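
Note: every probe in the debugLogs dump above fails with "context was not found" or "Profile ... not found" because the test was skipped before the kubenet-637965 profile was ever created; the debug collector still runs each probe against the nonexistent context, and the only kubeconfig it finds belongs to the unrelated force-systemd-flag-998986 profile left on the host. A minimal sketch of that style of collector, with two representative probes (hypothetical commands, not minikube's exact probe list):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "kubenet-637965"
	probes := []struct {
		name string
		args []string
	}{
		{"netcat: nslookup kubernetes.default", []string{"kubectl", "--context", profile, "exec", "deploy/netcat", "--", "nslookup", "kubernetes.default"}},
		{"host: crictl pods", []string{"minikube", "-p", profile, "ssh", "sudo crictl pods"}},
	}
	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.name)
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// Against a never-started profile, every probe reports an error.
			fmt.Println(err)
		}
	}
}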

TestNetworkPlugins/group/cilium (5.37s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-637965 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-637965

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-637965

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-637965

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-637965

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-637965

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-637965

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-637965

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-637965

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-637965

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-637965

>>> host: /etc/nsswitch.conf:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: /etc/hosts:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: /etc/resolv.conf:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-637965

>>> host: crictl pods:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: crictl containers:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> k8s: describe netcat deployment:
error: context "cilium-637965" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-637965" does not exist

>>> k8s: netcat logs:
error: context "cilium-637965" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-637965" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-637965" does not exist

>>> k8s: coredns logs:
error: context "cilium-637965" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-637965" does not exist

>>> k8s: api server logs:
error: context "cilium-637965" does not exist

>>> host: /etc/cni:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: ip a s:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: ip r s:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: iptables-save:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: iptables table nat:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-637965

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-637965

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-637965" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-637965" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-637965

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-637965

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-637965" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-637965" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-637965" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-637965" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-637965" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: kubelet daemon config:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> k8s: kubelet logs:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-637965

>>> host: docker daemon status:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: docker daemon config:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: docker system info:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: cri-docker daemon status:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: cri-docker daemon config:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: cri-dockerd version:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: containerd daemon status:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: containerd daemon config:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: containerd config dump:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: crio daemon status:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: crio daemon config:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: /etc/crio:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

>>> host: crio config:
* Profile "cilium-637965" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-637965"

----------------------- debugLogs end: cilium-637965 [took: 5.175328769s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-637965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-637965
--- SKIP: TestNetworkPlugins/group/cilium (5.37s)
