Test Report: Docker_Linux_crio_arm64 19690

f8db61c9b74e1fc8d4208c01add19855c5953b45:2024-09-23:36339

Failed tests (4/327)

| Order | Failed test                                 | Duration (s) |
|-------|---------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                | 73.85        |
| 34    | TestAddons/parallel/Ingress                 | 155.03       |
| 36    | TestAddons/parallel/MetricsServer           | 329.97       |
| 173   | TestMultiControlPlane/serial/RestartCluster | 126.42       |
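
To rerun one of these failures in isolation, the Go test runner can select a subtest by name. This is a sketch, assuming a checkout of the minikube repository whose integration tests live under test/integration (per the addons_test.go and helpers_test.go references below); the CI harness may pass additional flags such as the driver and container runtime:

    # Run only the failing subtest; quote the name because of the slash.
    # -timeout must exceed the test's internal waits (6m/10m pod waits appear below).
    go test ./test/integration -v -run 'TestAddons/parallel/Registry' -timeout 60m
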
TestAddons/parallel/Registry (73.85s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.74868ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-2g5d2" [d093e650-6688-49f8-9c46-28a49dd5a974] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00335058s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pqtjc" [cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003794953s
addons_test.go:338: (dbg) Run:  kubectl --context addons-133262 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-133262 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-133262 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.124067299s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-133262 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 ip
2024/09/23 13:37:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 addons disable registry --alsologtostderr -v=1
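
The exit status 1 after 1m0.12s means the busybox pod never got an HTTP response from the Service DNS name, even though the direct GET against the node IP above did answer. A minimal sketch for probing both endpoints by hand, assuming the addons-133262 profile is still running; the pod name registry-probe is arbitrary, and the address 192.168.49.2:5000 is taken from the DEBUG line above:

    # In-cluster probe: the same wget the test itself runs
    kubectl --context addons-133262 run --rm registry-probe --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host-side probe against the node IP, mirroring the DEBUG GET above
    curl -sI http://192.168.49.2:5000/
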
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-133262
helpers_test.go:235: (dbg) docker inspect addons-133262:

-- stdout --
	[
	    {
	        "Id": "5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95",
	        "Created": "2024-09-23T13:25:04.273986374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2384322,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T13:25:04.39615577Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/hostname",
	        "HostsPath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/hosts",
	        "LogPath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95-json.log",
	        "Name": "/addons-133262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-133262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-133262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc-init/diff:/var/lib/docker/overlay2/cb21b5e82393f0d5264c7db3ef721bc402a1fb078a3835cf5b3c87b0c534f7c3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-133262",
	                "Source": "/var/lib/docker/volumes/addons-133262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-133262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-133262",
	                "name.minikube.sigs.k8s.io": "addons-133262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1741029badc86a71140569cf0476e607610316c0823ed37e11befd21a27df5ad",
	            "SandboxKey": "/var/run/docker/netns/1741029badc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35738"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-133262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "32e42fc489c18023f59643e3f9c8a5aaca44c70cab10ea22839173b8efe7a5b0",
	                    "EndpointID": "f553c0425f96879275a6868c4915333e0a9bf18829e579f5bd5a87a9769b40ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-133262",
	                        "5025a3e56240"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
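
The inspect output above shows container port 5000/tcp published on the host at 127.0.0.1:35736, so the registry can also be checked from the Jenkins host without entering the cluster. A sketch, assuming the addons-133262 container is still running; /v2/ is the standard Docker registry HTTP API root:

    # Resolve the current host mapping for the registry port instead of hard-coding it
    docker port addons-133262 5000/tcp
    # Probe the registry API root through that mapping (port taken from NetworkSettings.Ports above)
    curl -sI http://127.0.0.1:35736/v2/
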
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-133262 -n addons-133262
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-133262 logs -n 25: (1.671820213s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-801108   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | -p download-only-801108                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| delete  | -p download-only-801108                                                                     | download-only-801108   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| start   | -o=json --download-only                                                                     | download-only-496865   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | -p download-only-496865                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| delete  | -p download-only-496865                                                                     | download-only-496865   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| delete  | -p download-only-801108                                                                     | download-only-801108   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| delete  | -p download-only-496865                                                                     | download-only-496865   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| start   | --download-only -p                                                                          | download-docker-237977 | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | download-docker-237977                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-237977                                                                   | download-docker-237977 | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-127301   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | binary-mirror-127301                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42465                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-127301                                                                     | binary-mirror-127301   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| addons  | enable dashboard -p                                                                         | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | addons-133262                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | addons-133262                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-133262 --wait=true                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:36 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:36 UTC |
	|         | -p addons-133262                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-133262 ssh cat                                                                       | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:36 UTC |
	|         | /opt/local-path-provisioner/pvc-ba93c3ca-4ceb-4c2d-8d75-76b896b20b5e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-133262 ip                                                                            | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:24:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:24:40.364478 2383828 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:24:40.364687 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:24:40.364718 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:24:40.364739 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:24:40.365007 2383828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 13:24:40.365476 2383828 out.go:352] Setting JSON to false
	I0923 13:24:40.366420 2383828 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":54423,"bootTime":1727043457,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0923 13:24:40.366518 2383828 start.go:139] virtualization:  
	I0923 13:24:40.368697 2383828 out.go:177] * [addons-133262] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:24:40.370555 2383828 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:24:40.370655 2383828 notify.go:220] Checking for updates...
	I0923 13:24:40.373762 2383828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:24:40.375645 2383828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:24:40.376840 2383828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	I0923 13:24:40.378275 2383828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:24:40.379541 2383828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:24:40.380976 2383828 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:24:40.425606 2383828 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:24:40.425734 2383828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:24:40.478465 2383828 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:24:40.468583329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:24:40.478577 2383828 docker.go:318] overlay module found
	I0923 13:24:40.480220 2383828 out.go:177] * Using the docker driver based on user configuration
	I0923 13:24:40.481509 2383828 start.go:297] selected driver: docker
	I0923 13:24:40.481524 2383828 start.go:901] validating driver "docker" against <nil>
	I0923 13:24:40.481538 2383828 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:24:40.482184 2383828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:24:40.531533 2383828 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:24:40.521410022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:24:40.531752 2383828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:24:40.531987 2383828 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:24:40.533357 2383828 out.go:177] * Using Docker driver with root privileges
	I0923 13:24:40.534774 2383828 cni.go:84] Creating CNI manager for ""
	I0923 13:24:40.534836 2383828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:24:40.534848 2383828 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 13:24:40.534944 2383828 start.go:340] cluster config:
	{Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:24:40.536381 2383828 out.go:177] * Starting "addons-133262" primary control-plane node in "addons-133262" cluster
	I0923 13:24:40.537851 2383828 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 13:24:40.539216 2383828 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:24:40.540387 2383828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:24:40.540468 2383828 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0923 13:24:40.540480 2383828 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:24:40.540486 2383828 cache.go:56] Caching tarball of preloaded images
	I0923 13:24:40.540576 2383828 preload.go:172] Found /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0923 13:24:40.540587 2383828 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 13:24:40.540932 2383828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/config.json ...
	I0923 13:24:40.540964 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/config.json: {Name:mk0f11192ff62aa19eaf7345f3142fd23df23f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:24:40.557194 2383828 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:24:40.557302 2383828 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:24:40.557321 2383828 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 13:24:40.557327 2383828 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 13:24:40.557334 2383828 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 13:24:40.557340 2383828 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 13:24:57.517135 2383828 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 13:24:57.517177 2383828 cache.go:194] Successfully downloaded all kic artifacts
	I0923 13:24:57.517208 2383828 start.go:360] acquireMachinesLock for addons-133262: {Name:mkbc92a211fc9b19084838acda6ec6db74ac2de5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:24:57.517340 2383828 start.go:364] duration metric: took 100.034µs to acquireMachinesLock for "addons-133262"
	I0923 13:24:57.517372 2383828 start.go:93] Provisioning new machine with config: &{Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 13:24:57.517487 2383828 start.go:125] createHost starting for "" (driver="docker")
	I0923 13:24:57.519552 2383828 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 13:24:57.519788 2383828 start.go:159] libmachine.API.Create for "addons-133262" (driver="docker")
	I0923 13:24:57.519822 2383828 client.go:168] LocalClient.Create starting
	I0923 13:24:57.519927 2383828 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem
	I0923 13:24:57.928803 2383828 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem
	I0923 13:24:58.062903 2383828 cli_runner.go:164] Run: docker network inspect addons-133262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 13:24:58.077185 2383828 cli_runner.go:211] docker network inspect addons-133262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 13:24:58.077288 2383828 network_create.go:284] running [docker network inspect addons-133262] to gather additional debugging logs...
	I0923 13:24:58.077309 2383828 cli_runner.go:164] Run: docker network inspect addons-133262
	W0923 13:24:58.092464 2383828 cli_runner.go:211] docker network inspect addons-133262 returned with exit code 1
	I0923 13:24:58.092500 2383828 network_create.go:287] error running [docker network inspect addons-133262]: docker network inspect addons-133262: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-133262 not found
	I0923 13:24:58.092521 2383828 network_create.go:289] output of [docker network inspect addons-133262]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-133262 not found
	
	** /stderr **
	I0923 13:24:58.092643 2383828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:24:58.108933 2383828 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001781250}
	I0923 13:24:58.108976 2383828 network_create.go:124] attempt to create docker network addons-133262 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 13:24:58.109032 2383828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-133262 addons-133262
	I0923 13:24:58.181902 2383828 network_create.go:108] docker network addons-133262 192.168.49.0/24 created
	I0923 13:24:58.181937 2383828 kic.go:121] calculated static IP "192.168.49.2" for the "addons-133262" container
	I0923 13:24:58.182008 2383828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 13:24:58.195905 2383828 cli_runner.go:164] Run: docker volume create addons-133262 --label name.minikube.sigs.k8s.io=addons-133262 --label created_by.minikube.sigs.k8s.io=true
	I0923 13:24:58.210686 2383828 oci.go:103] Successfully created a docker volume addons-133262
	I0923 13:24:58.210778 2383828 cli_runner.go:164] Run: docker run --rm --name addons-133262-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133262 --entrypoint /usr/bin/test -v addons-133262:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 13:25:00.216144 2383828 cli_runner.go:217] Completed: docker run --rm --name addons-133262-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133262 --entrypoint /usr/bin/test -v addons-133262:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.005304311s)
	I0923 13:25:00.216182 2383828 oci.go:107] Successfully prepared a docker volume addons-133262
	I0923 13:25:00.216215 2383828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:25:00.216236 2383828 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 13:25:00.216350 2383828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-133262:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 13:25:04.208435 2383828 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-133262:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.992035598s)
	I0923 13:25:04.208472 2383828 kic.go:203] duration metric: took 3.992232385s to extract preloaded images to volume ...
	W0923 13:25:04.208630 2383828 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 13:25:04.208755 2383828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 13:25:04.259929 2383828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-133262 --name addons-133262 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133262 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-133262 --network addons-133262 --ip 192.168.49.2 --volume addons-133262:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 13:25:04.567167 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Running}}
	I0923 13:25:04.589203 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:04.612088 2383828 cli_runner.go:164] Run: docker exec addons-133262 stat /var/lib/dpkg/alternatives/iptables
	I0923 13:25:04.695578 2383828 oci.go:144] the created container "addons-133262" has a running status.
	I0923 13:25:04.695609 2383828 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa...
	I0923 13:25:05.137525 2383828 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 13:25:05.169488 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:05.191833 2383828 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 13:25:05.191853 2383828 kic_runner.go:114] Args: [docker exec --privileged addons-133262 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 13:25:05.256602 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:05.283280 2383828 machine.go:93] provisionDockerMachine start ...
	I0923 13:25:05.283429 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:05.305554 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:05.305832 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:05.305849 2383828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:25:05.485763 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-133262
	
	I0923 13:25:05.485787 2383828 ubuntu.go:169] provisioning hostname "addons-133262"
	I0923 13:25:05.485852 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:05.505809 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:05.506049 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:05.506062 2383828 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-133262 && echo "addons-133262" | sudo tee /etc/hostname
	I0923 13:25:05.661069 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-133262
	
	I0923 13:25:05.661155 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:05.688059 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:05.688338 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:05.688355 2383828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-133262' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-133262/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-133262' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:25:05.822488 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:25:05.822526 2383828 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-2377681/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-2377681/.minikube}
	I0923 13:25:05.822550 2383828 ubuntu.go:177] setting up certificates
	I0923 13:25:05.822561 2383828 provision.go:84] configureAuth start
	I0923 13:25:05.822632 2383828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133262
	I0923 13:25:05.839354 2383828 provision.go:143] copyHostCerts
	I0923 13:25:05.839446 2383828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem (1078 bytes)
	I0923 13:25:05.839573 2383828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem (1123 bytes)
	I0923 13:25:05.839636 2383828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem (1679 bytes)
	I0923 13:25:05.839689 2383828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem org=jenkins.addons-133262 san=[127.0.0.1 192.168.49.2 addons-133262 localhost minikube]
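
The san=[...] list above ends up in the generated server.pem; it can be checked with stock openssl (a sketch, path taken from the log line above):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # expect entries for DNS:addons-133262, DNS:localhost, DNS:minikube,
    # IP:127.0.0.1 and IP:192.168.49.2
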
	I0923 13:25:06.495243 2383828 provision.go:177] copyRemoteCerts
	I0923 13:25:06.495317 2383828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:25:06.495387 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:06.514794 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:06.612607 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 13:25:06.638504 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 13:25:06.663621 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:25:06.689379 2383828 provision.go:87] duration metric: took 866.80454ms to configureAuth
	I0923 13:25:06.689451 2383828 ubuntu.go:193] setting minikube options for container-runtime
	I0923 13:25:06.689667 2383828 config.go:182] Loaded profile config "addons-133262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:25:06.689785 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:06.707118 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:06.707369 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:06.707392 2383828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:25:06.938544 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
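
10.96.0.0/12 above is the cluster's service CIDR (see ServiceCIDR in the cluster config later in this log), so CRI-O will treat in-cluster registry endpoints, e.g. the registry addon's ClusterIP, as plain-HTTP registries. Assuming the kicbase crio.service picks the file up via an EnvironmentFile= stanza, this can be verified with:

    docker exec addons-133262 systemctl cat crio | grep -n crio.minikube
    docker exec addons-133262 cat /etc/sysconfig/crio.minikube
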
	I0923 13:25:06.938576 2383828 machine.go:96] duration metric: took 1.655268945s to provisionDockerMachine
	I0923 13:25:06.938587 2383828 client.go:171] duration metric: took 9.418759041s to LocalClient.Create
	I0923 13:25:06.938600 2383828 start.go:167] duration metric: took 9.418812767s to libmachine.API.Create "addons-133262"
	I0923 13:25:06.938608 2383828 start.go:293] postStartSetup for "addons-133262" (driver="docker")
	I0923 13:25:06.938620 2383828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:25:06.938686 2383828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:25:06.938731 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:06.956302 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.055692 2383828 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:25:07.058884 2383828 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 13:25:07.058918 2383828 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 13:25:07.058931 2383828 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 13:25:07.058938 2383828 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 13:25:07.058953 2383828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/addons for local assets ...
	I0923 13:25:07.059040 2383828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/files for local assets ...
	I0923 13:25:07.059075 2383828 start.go:296] duration metric: took 120.460907ms for postStartSetup
	I0923 13:25:07.059396 2383828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133262
	I0923 13:25:07.076417 2383828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/config.json ...
	I0923 13:25:07.076731 2383828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:25:07.076792 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:07.093453 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.183072 2383828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 13:25:07.187501 2383828 start.go:128] duration metric: took 9.669998429s to createHost
	I0923 13:25:07.187526 2383828 start.go:83] releasing machines lock for "addons-133262", held for 9.670170929s
	I0923 13:25:07.187597 2383828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133262
	I0923 13:25:07.203630 2383828 ssh_runner.go:195] Run: cat /version.json
	I0923 13:25:07.203673 2383828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:25:07.203683 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:07.203744 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:07.223131 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.234414 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.436803 2383828 ssh_runner.go:195] Run: systemctl --version
	I0923 13:25:07.441288 2383828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:25:07.583937 2383828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:25:07.588356 2383828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:25:07.611186 2383828 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0923 13:25:07.611279 2383828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:25:07.642594 2383828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
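
Both find/mv passes above rename pre-installed CNI configs to *.mk_disabled so that kindnet, applied after kubeadm init below, is the only CNI configuration CRI-O sees. The renamed files remain on disk:

    docker exec addons-133262 ls /etc/cni/net.d
    # 100-crio-bridge.conf.mk_disabled  87-podman-bridge.conflist.mk_disabled  ...
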
	I0923 13:25:07.642666 2383828 start.go:495] detecting cgroup driver to use...
	I0923 13:25:07.642718 2383828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 13:25:07.642799 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:25:07.659158 2383828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:25:07.670791 2383828 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:25:07.670915 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:25:07.685963 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:25:07.700410 2383828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:25:07.793728 2383828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:25:07.888156 2383828 docker.go:233] disabling docker service ...
	I0923 13:25:07.888238 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:25:07.908488 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:25:07.920988 2383828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:25:08.011802 2383828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:25:08.116061 2383828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 13:25:08.127456 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:25:08.144788 2383828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 13:25:08.144859 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.155741 2383828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:25:08.155815 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.166342 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.176318 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.186297 2383828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:25:08.195794 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.205821 2383828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.222517 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
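
After the sed pipeline above, /etc/crio/crio.conf.d/02-crio.conf should carry the settings just written; a quick spot-check (expected values reconstructed from the sed expressions, not re-read from the node):

    docker exec addons-133262 grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
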
	I0923 13:25:08.232461 2383828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:25:08.241712 2383828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:25:08.250384 2383828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:25:08.337916 2383828 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 13:25:08.443675 2383828 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:25:08.443763 2383828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:25:08.447871 2383828 start.go:563] Will wait 60s for crictl version
	I0923 13:25:08.447976 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:25:08.451632 2383828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:25:08.495719 2383828 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0923 13:25:08.495829 2383828 ssh_runner.go:195] Run: crio --version
	I0923 13:25:08.534184 2383828 ssh_runner.go:195] Run: crio --version
	I0923 13:25:08.574119 2383828 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0923 13:25:08.575986 2383828 cli_runner.go:164] Run: docker network inspect addons-133262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:25:08.591880 2383828 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 13:25:08.595405 2383828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
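
The one-liner above is minikube's idempotent /etc/hosts pin: grep -v strips any stale entry, the fresh line is appended, and sudo cp moves the temp file into place (a bare > redirect would not survive sudo). The same pattern, parameterized as a hypothetical helper:

    pin_host() {  # pin_host NAME IP -- hypothetical helper, not part of minikube
      { grep -v $'\t'"$1"'$' /etc/hosts; printf '%s\t%s\n' "$2" "$1"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
    pin_host host.minikube.internal 192.168.49.1
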
	I0923 13:25:08.606218 2383828 kubeadm.go:883] updating cluster {Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:25:08.606418 2383828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:25:08.606486 2383828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:25:08.683043 2383828 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:25:08.683069 2383828 crio.go:433] Images already preloaded, skipping extraction
	I0923 13:25:08.683126 2383828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:25:08.718285 2383828 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:25:08.718324 2383828 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:25:08.718333 2383828 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0923 13:25:08.718438 2383828 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-133262 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:25:08.718527 2383828 ssh_runner.go:195] Run: crio config
	I0923 13:25:08.764315 2383828 cni.go:84] Creating CNI manager for ""
	I0923 13:25:08.764337 2383828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:25:08.764348 2383828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:25:08.764370 2383828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-133262 NodeName:addons-133262 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:25:08.764526 2383828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-133262"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
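
The kubeadm config printed above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. It can be exercised without touching node state via kubeadm's dry-run mode (a sketch; the same preflight warnings minikube later ignores would still be printed):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
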
	I0923 13:25:08.764603 2383828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:25:08.773406 2383828 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:25:08.773479 2383828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:25:08.782241 2383828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0923 13:25:08.800013 2383828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:25:08.818404 2383828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0923 13:25:08.836149 2383828 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 13:25:08.839708 2383828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:25:08.850762 2383828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:25:08.932670 2383828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:25:08.946645 2383828 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262 for IP: 192.168.49.2
	I0923 13:25:08.946664 2383828 certs.go:194] generating shared ca certs ...
	I0923 13:25:08.946681 2383828 certs.go:226] acquiring lock for ca certs: {Name:mka74fca5f9586bfec26165232a0abe6b9527b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:08.946856 2383828 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key
	I0923 13:25:09.534535 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt ...
	I0923 13:25:09.534569 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt: {Name:mkd6669f44b9a5690ab69d1191d9d59bfa475998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.534806 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key ...
	I0923 13:25:09.534822 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key: {Name:mkcb9f518a9706e806f1e3ce2b21f17dd1ea4af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.535463 2383828 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key
	I0923 13:25:09.881577 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt ...
	I0923 13:25:09.881615 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt: {Name:mkfe3b6cdbf84ec160efdee677ace7ad97157d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.881813 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key ...
	I0923 13:25:09.881828 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key: {Name:mkfb51a840155a14a8cc8bb45048279f9c0b2777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.881912 2383828 certs.go:256] generating profile certs ...
	I0923 13:25:09.882006 2383828 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.key
	I0923 13:25:09.882034 2383828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt with IP's: []
	I0923 13:25:10.566644 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt ...
	I0923 13:25:10.566674 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: {Name:mkd81ca15f11b2786974e7876e3c9aed3e2d4234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.567469 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.key ...
	I0923 13:25:10.567490 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.key: {Name:mk6021386003345160ab870bf118db0d5b101e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.567623 2383828 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912
	I0923 13:25:10.567648 2383828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 13:25:10.852497 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912 ...
	I0923 13:25:10.852533 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912: {Name:mk7f27ae99622d8c8fa852d7ef4a1bd4d1377cc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.853247 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912 ...
	I0923 13:25:10.853270 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912: {Name:mke5687c64d611e598a2d4dfa2e1b457cefad09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.853768 2383828 certs.go:381] copying /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912 -> /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt
	I0923 13:25:10.853857 2383828 certs.go:385] copying /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912 -> /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key
	I0923 13:25:10.853920 2383828 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key
	I0923 13:25:10.853944 2383828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt with IP's: []
	I0923 13:25:11.253287 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt ...
	I0923 13:25:11.253320 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt: {Name:mkec361222a939c4fff7d39836686e89c78445d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:11.253510 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key ...
	I0923 13:25:11.253524 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key: {Name:mkd82ad2e44c4406a63509e86866460eeda368df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:11.253710 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 13:25:11.253753 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem (1078 bytes)
	I0923 13:25:11.253784 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:25:11.253812 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem (1679 bytes)
	I0923 13:25:11.254465 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:25:11.280459 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:25:11.308504 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:25:11.341407 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:25:11.365448 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 13:25:11.390204 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 13:25:11.414590 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:25:11.439501 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:25:11.463335 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:25:11.488243 2383828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:25:11.506147 2383828 ssh_runner.go:195] Run: openssl version
	I0923 13:25:11.511692 2383828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:25:11.521261 2383828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:25:11.524826 2383828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:25 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:25:11.524943 2383828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:25:11.532134 2383828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
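
The b5213941.0 name is not arbitrary: OpenSSL resolves CAs in /etc/ssl/certs by subject-name hash, and the link must be named <hash>.0 (the c_rehash convention, with .0 as the collision suffix). The hash comes straight from the openssl x509 -hash call above:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
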
	I0923 13:25:11.541360 2383828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:25:11.544583 2383828 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:25:11.544637 2383828 kubeadm.go:392] StartCluster: {Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:25:11.544720 2383828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:25:11.544790 2383828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:25:11.581094 2383828 cri.go:89] found id: ""
	I0923 13:25:11.581187 2383828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:25:11.590237 2383828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 13:25:11.599295 2383828 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 13:25:11.599391 2383828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:25:11.608400 2383828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:25:11.608424 2383828 kubeadm.go:157] found existing configuration files:
	
	I0923 13:25:11.608478 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:25:11.617384 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:25:11.617458 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:25:11.626442 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:25:11.635222 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:25:11.635294 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:25:11.643984 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:25:11.653034 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:25:11.653121 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:25:11.661943 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:25:11.670520 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:25:11.670582 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 13:25:11.678902 2383828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 13:25:11.719171 2383828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 13:25:11.719491 2383828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 13:25:11.740162 2383828 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 13:25:11.740239 2383828 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 13:25:11.740288 2383828 kubeadm.go:310] OS: Linux
	I0923 13:25:11.740344 2383828 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 13:25:11.740396 2383828 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 13:25:11.740445 2383828 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 13:25:11.740496 2383828 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 13:25:11.740549 2383828 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 13:25:11.740599 2383828 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 13:25:11.740647 2383828 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 13:25:11.740698 2383828 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 13:25:11.740747 2383828 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 13:25:11.804353 2383828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 13:25:11.804468 2383828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 13:25:11.804565 2383828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 13:25:11.811498 2383828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:25:11.813835 2383828 out.go:235]   - Generating certificates and keys ...
	I0923 13:25:11.814031 2383828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 13:25:11.814147 2383828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 13:25:12.062735 2383828 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 13:25:12.591731 2383828 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 13:25:13.268376 2383828 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 13:25:13.777588 2383828 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 13:25:14.367839 2383828 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 13:25:14.368150 2383828 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-133262 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 13:25:14.571927 2383828 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 13:25:14.572261 2383828 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-133262 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 13:25:14.938024 2383828 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 13:25:15.818972 2383828 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 13:25:16.397788 2383828 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 13:25:16.398106 2383828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:25:16.811849 2383828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:25:17.440724 2383828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 13:25:18.228845 2383828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:25:18.373394 2383828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:25:18.887331 2383828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:25:18.888146 2383828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:25:18.891236 2383828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:25:18.893066 2383828 out.go:235]   - Booting up control plane ...
	I0923 13:25:18.893163 2383828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:25:18.893238 2383828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:25:18.894026 2383828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:25:18.904186 2383828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:25:18.910454 2383828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:25:18.910511 2383828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 13:25:19.004454 2383828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 13:25:19.004576 2383828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:25:20.505668 2383828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501072601s
	I0923 13:25:20.505759 2383828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 13:25:26.007712 2383828 kubeadm.go:310] [api-check] The API server is healthy after 5.502311988s
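
Both probes above are plain HTTP endpoints and can be polled by hand while the control plane boots; the kubelet one needs no credentials (a sketch, run inside the node, where curl is available):

    docker exec addons-133262 curl -sf http://127.0.0.1:10248/healthz
    # prints: ok
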
	I0923 13:25:26.031158 2383828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 13:25:26.046565 2383828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 13:25:26.076539 2383828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 13:25:26.076736 2383828 kubeadm.go:310] [mark-control-plane] Marking the node addons-133262 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 13:25:26.087778 2383828 kubeadm.go:310] [bootstrap-token] Using token: kkrgrl.3o8iief7llcjzdwt
	I0923 13:25:26.090470 2383828 out.go:235]   - Configuring RBAC rules ...
	I0923 13:25:26.090609 2383828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 13:25:26.096407 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 13:25:26.106960 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 13:25:26.110745 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 13:25:26.114782 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 13:25:26.119709 2383828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 13:25:26.414947 2383828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 13:25:26.846545 2383828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 13:25:27.414986 2383828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 13:25:27.416208 2383828 kubeadm.go:310] 
	I0923 13:25:27.416286 2383828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 13:25:27.416296 2383828 kubeadm.go:310] 
	I0923 13:25:27.416373 2383828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 13:25:27.416383 2383828 kubeadm.go:310] 
	I0923 13:25:27.416408 2383828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 13:25:27.416469 2383828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 13:25:27.416523 2383828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 13:25:27.416531 2383828 kubeadm.go:310] 
	I0923 13:25:27.416593 2383828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 13:25:27.416602 2383828 kubeadm.go:310] 
	I0923 13:25:27.416649 2383828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 13:25:27.416657 2383828 kubeadm.go:310] 
	I0923 13:25:27.416707 2383828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 13:25:27.416784 2383828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 13:25:27.416855 2383828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 13:25:27.416864 2383828 kubeadm.go:310] 
	I0923 13:25:27.416947 2383828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 13:25:27.417026 2383828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 13:25:27.417034 2383828 kubeadm.go:310] 
	I0923 13:25:27.417117 2383828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kkrgrl.3o8iief7llcjzdwt \
	I0923 13:25:27.417221 2383828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc25ddfa50091362c7bfdbe09ed12c0b94b944390ba1bf979075d78a22051d17 \
	I0923 13:25:27.417246 2383828 kubeadm.go:310] 	--control-plane 
	I0923 13:25:27.417251 2383828 kubeadm.go:310] 
	I0923 13:25:27.417334 2383828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 13:25:27.417344 2383828 kubeadm.go:310] 
	I0923 13:25:27.417424 2383828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kkrgrl.3o8iief7llcjzdwt \
	I0923 13:25:27.417529 2383828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc25ddfa50091362c7bfdbe09ed12c0b94b944390ba1bf979075d78a22051d17 
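
The sha256 value in both join commands is a hash of the cluster CA's public key and can be recomputed at any time with the standard openssl pipeline (CA path per the certificatesDir in the kubeadm config above; the rsa step matches minikube's RSA-keyed CA):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # expect bc25ddfa50091362c7bfdbe09ed12c0b94b944390ba1bf979075d78a22051d17
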
	I0923 13:25:27.421442 2383828 kubeadm.go:310] W0923 13:25:11.715767    1187 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:25:27.421763 2383828 kubeadm.go:310] W0923 13:25:11.716771    1187 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:25:27.421999 2383828 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 13:25:27.422114 2383828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:25:27.422211 2383828 cni.go:84] Creating CNI manager for ""
	I0923 13:25:27.422223 2383828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:25:27.424992 2383828 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 13:25:27.427655 2383828 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 13:25:27.434913 2383828 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 13:25:27.434938 2383828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 13:25:27.453393 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
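
The applied cni.yaml is minikube's bundled kindnet manifest; assuming its usual kube-system/kindnet DaemonSet naming, the rollout can be checked with the same pinned kubectl:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonset kindnet
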
	I0923 13:25:27.737776 2383828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 13:25:27.737920 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:27.738003 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-133262 minikube.k8s.io/updated_at=2024_09_23T13_25_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-133262 minikube.k8s.io/primary=true
	I0923 13:25:27.874905 2383828 ops.go:34] apiserver oom_adj: -16
	I0923 13:25:27.875025 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:28.375543 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:28.875477 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:29.375599 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:29.875835 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:30.375081 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:30.876003 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:31.375136 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:31.875141 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:31.987664 2383828 kubeadm.go:1113] duration metric: took 4.24984179s to wait for elevateKubeSystemPrivileges
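
The repeated "get sa default" calls above poll until the default ServiceAccount exists, the usual signal that the controller-manager's serviceaccount controller is live; the minikube-rbac binding created alongside grants cluster-admin to kube-system:default. It can be inspected afterwards:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o wide
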
	I0923 13:25:31.987703 2383828 kubeadm.go:394] duration metric: took 20.443068903s to StartCluster
	I0923 13:25:31.987722 2383828 settings.go:142] acquiring lock: {Name:mkec0ac22c7afe2712cd8676389ce937f473d18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:31.987847 2383828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:25:31.988235 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/kubeconfig: {Name:mk1c3c49c69db07ab1c6462bef79c6f07c9c4b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:31.988441 2383828 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 13:25:31.988585 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 13:25:31.988829 2383828 config.go:182] Loaded profile config "addons-133262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:25:31.988864 2383828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
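
The toEnable map above is what minikube start resolved from defaults plus flags; each entry maps 1:1 to the addons CLI, so the same set can be adjusted after start (a sketch using this profile):

    minikube -p addons-133262 addons list
    minikube -p addons-133262 addons enable registry
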
	I0923 13:25:31.988948 2383828 addons.go:69] Setting yakd=true in profile "addons-133262"
	I0923 13:25:31.988966 2383828 addons.go:234] Setting addon yakd=true in "addons-133262"
	I0923 13:25:31.988994 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.989510 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.989971 2383828 addons.go:69] Setting cloud-spanner=true in profile "addons-133262"
	I0923 13:25:31.989993 2383828 addons.go:234] Setting addon cloud-spanner=true in "addons-133262"
	I0923 13:25:31.990019 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.990092 2383828 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-133262"
	I0923 13:25:31.990110 2383828 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-133262"
	I0923 13:25:31.990135 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.990504 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.990578 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.994496 2383828 addons.go:69] Setting registry=true in profile "addons-133262"
	I0923 13:25:31.994563 2383828 addons.go:234] Setting addon registry=true in "addons-133262"
	I0923 13:25:31.994616 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.995146 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.995279 2383828 addons.go:69] Setting storage-provisioner=true in profile "addons-133262"
	I0923 13:25:31.996290 2383828 addons.go:234] Setting addon storage-provisioner=true in "addons-133262"
	I0923 13:25:31.996326 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.996775 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999264 2383828 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-133262"
	I0923 13:25:31.999371 2383828 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-133262"
	I0923 13:25:31.999947 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.005578 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999500 2383828 addons.go:69] Setting default-storageclass=true in profile "addons-133262"
	I0923 13:25:32.007965 2383828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-133262"
	I0923 13:25:32.008523 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.995298 2383828 addons.go:69] Setting volcano=true in profile "addons-133262"
	I0923 13:25:32.012948 2383828 addons.go:234] Setting addon volcano=true in "addons-133262"
	I0923 13:25:32.013004 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.013497 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.995305 2383828 addons.go:69] Setting volumesnapshots=true in profile "addons-133262"
	I0923 13:25:32.028502 2383828 addons.go:234] Setting addon volumesnapshots=true in "addons-133262"
	I0923 13:25:32.028573 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.029136 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999508 2383828 addons.go:69] Setting gcp-auth=true in profile "addons-133262"
	I0923 13:25:32.052306 2383828 mustload.go:65] Loading cluster: addons-133262
	I0923 13:25:32.052517 2383828 config.go:182] Loaded profile config "addons-133262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:25:32.052788 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999514 2383828 addons.go:69] Setting ingress=true in profile "addons-133262"
	I0923 13:25:32.070953 2383828 addons.go:234] Setting addon ingress=true in "addons-133262"
	I0923 13:25:32.071007 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.071476 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999518 2383828 addons.go:69] Setting ingress-dns=true in profile "addons-133262"
	I0923 13:25:32.092899 2383828 addons.go:234] Setting addon ingress-dns=true in "addons-133262"
	I0923 13:25:32.092953 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.093443 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.109354 2383828 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 13:25:32.114598 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 13:25:32.114680 2383828 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 13:25:32.114765 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
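Each addon install in this stretch follows the same two-step shape: render the manifest and scp it to /etc/kubernetes/addons on the node, then resolve the kic container's forwarded SSH port with `docker container inspect` so the copy can go over 127.0.0.1. A self-contained sketch of the port-resolution step, using the exact inspect template from the log (the helper name is an assumption):

	// Resolve the host port Docker forwards to the container's sshd (22/tcp).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("addons-133262")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh to 127.0.0.1:" + port) // 35734 in this run
	}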
	I0923 13:25:31.999521 2383828 addons.go:69] Setting inspektor-gadget=true in profile "addons-133262"
	I0923 13:25:32.118436 2383828 addons.go:234] Setting addon inspektor-gadget=true in "addons-133262"
	I0923 13:25:32.118545 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.999525 2383828 addons.go:69] Setting metrics-server=true in profile "addons-133262"
	I0923 13:25:32.121769 2383828 addons.go:234] Setting addon metrics-server=true in "addons-133262"
	I0923 13:25:32.121814 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.122333 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999534 2383828 out.go:177] * Verifying Kubernetes components...
	I0923 13:25:32.135133 2383828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:25:31.995291 2383828 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-133262"
	I0923 13:25:32.135493 2383828 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-133262"
	I0923 13:25:32.135848 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.160968 2383828 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 13:25:32.165055 2383828 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 13:25:32.165077 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 13:25:32.165152 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.202702 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 13:25:32.205673 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 13:25:32.205756 2383828 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 13:25:32.205880 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.211247 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 13:25:32.214845 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 13:25:32.219909 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 13:25:32.222637 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 13:25:32.242770 2383828 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 13:25:32.250532 2383828 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 13:25:32.250557 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 13:25:32.250646 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.262838 2383828 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 13:25:32.263472 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	W0923 13:25:32.266501 2383828 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 13:25:32.277288 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:25:32.277562 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 13:25:32.277651 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 13:25:32.279781 2383828 addons.go:234] Setting addon default-storageclass=true in "addons-133262"
	I0923 13:25:32.279819 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.282986 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.285904 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 13:25:32.289027 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 13:25:32.289071 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:25:32.289082 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 13:25:32.289194 2383828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:25:32.299635 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 13:25:32.299715 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.317274 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 13:25:32.317462 2383828 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-133262"
	I0923 13:25:32.317498 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.317931 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.318077 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 13:25:32.319694 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.325032 2383828 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 13:25:32.325087 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 13:25:32.325170 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.359357 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 13:25:32.361120 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 13:25:32.361147 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 13:25:32.361222 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.361396 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:25:32.365605 2383828 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 13:25:32.365642 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 13:25:32.365710 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.398727 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.402381 2383828 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 13:25:32.402408 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 13:25:32.402473 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.417359 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.428109 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.436811 2383828 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 13:25:32.439528 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 13:25:32.439555 2383828 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 13:25:32.439632 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.506455 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.521998 2383828 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 13:25:32.525970 2383828 out.go:177]   - Using image docker.io/busybox:stable
	I0923 13:25:32.529488 2383828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 13:25:32.529517 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 13:25:32.529582 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.529774 2383828 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 13:25:32.532730 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 13:25:32.532755 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 13:25:32.532824 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.540460 2383828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 13:25:32.540480 2383828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 13:25:32.540539 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.540761 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.544187 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.566704 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.583136 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.588518 2383828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:25:32.619886 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.656566 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.657174 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.665769 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.672132 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	W0923 13:25:32.672854 2383828 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 13:25:32.672878 2383828 retry.go:31] will retry after 251.380216ms: ssh: handshake failed: EOF
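One of the many concurrent SSH dials above hit an EOF mid-handshake (the container's sshd was still settling under the parallel addon setup), and retry.go schedules another attempt after a randomized ~250ms delay instead of failing the addon outright. A minimal retry-with-jitter sketch of that shape; the attempt count and base delay are assumptions:

	// Retry an operation with a randomized delay so parallel dialers
	// don't retry in lockstep.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithJitter(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			d := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithJitter(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: EOF") // simulated flaky dial
			}
			return nil
		})
		fmt.Println("result:", err)
	}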
	I0923 13:25:32.834603 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 13:25:32.952650 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 13:25:32.952726 2383828 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 13:25:32.958869 2383828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 13:25:32.958945 2383828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 13:25:32.981074 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 13:25:33.003665 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 13:25:33.003753 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 13:25:33.020302 2383828 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 13:25:33.020395 2383828 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 13:25:33.064932 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 13:25:33.071188 2383828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 13:25:33.071266 2383828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 13:25:33.093040 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 13:25:33.096895 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:25:33.116925 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 13:25:33.118205 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 13:25:33.118262 2383828 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 13:25:33.127649 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 13:25:33.151493 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 13:25:33.151517 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 13:25:33.173138 2383828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 13:25:33.173162 2383828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 13:25:33.186149 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 13:25:33.186172 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 13:25:33.202184 2383828 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 13:25:33.202204 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 13:25:33.247710 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 13:25:33.247785 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 13:25:33.273067 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 13:25:33.273142 2383828 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 13:25:33.288177 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 13:25:33.288259 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 13:25:33.305420 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 13:25:33.305494 2383828 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 13:25:33.353173 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 13:25:33.368265 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 13:25:33.368342 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 13:25:33.437059 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 13:25:33.437132 2383828 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 13:25:33.440876 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 13:25:33.440949 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 13:25:33.449345 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 13:25:33.449418 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 13:25:33.473562 2383828 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:25:33.473637 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 13:25:33.523594 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 13:25:33.523675 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 13:25:33.583312 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 13:25:33.613866 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 13:25:33.613944 2383828 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 13:25:33.617882 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 13:25:33.617946 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 13:25:33.652467 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:25:33.681957 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 13:25:33.682035 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 13:25:33.690387 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 13:25:33.710624 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 13:25:33.710702 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 13:25:33.780743 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 13:25:33.780817 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 13:25:33.815507 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 13:25:33.815588 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 13:25:33.857088 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 13:25:33.857166 2383828 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 13:25:33.918017 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 13:25:33.918092 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 13:25:33.929357 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 13:25:33.929432 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 13:25:33.979747 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 13:25:33.979822 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 13:25:33.983608 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 13:25:34.037007 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 13:25:34.037089 2383828 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 13:25:34.151213 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 13:25:35.781487 2383828 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.192933328s)
	I0923 13:25:35.782545 2383828 node_ready.go:35] waiting up to 6m0s for node "addons-133262" to be "Ready" ...
	I0923 13:25:35.782865 2383828 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.496873728s)
	I0923 13:25:35.782924 2383828 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
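The long sed pipeline that just completed rewrites the coredns ConfigMap in place: it splices a `hosts` stanza (resolving host.minikube.internal to the container gateway, 192.168.49.1) in front of the `forward` plugin, adds a `log` directive after `errors`, and pipes the result back through `kubectl replace`. Reconstructed from the sed expressions, the injected Corefile fragment looks like (indentation illustrative):

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

`fallthrough` matters here: queries for anything other than host.minikube.internal drop through to the rest of the Corefile instead of getting NXDOMAIN from the hosts plugin.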
	I0923 13:25:36.428913 2383828 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-133262" context rescaled to 1 replicas
	I0923 13:25:36.462382 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.627742097s)
	I0923 13:25:37.802089 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:38.409822 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.428662395s)
	I0923 13:25:38.409900 2383828 addons.go:475] Verifying addon ingress=true in "addons-133262"
	I0923 13:25:38.410127 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.345169486s)
	I0923 13:25:38.410241 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.317116518s)
	I0923 13:25:38.410368 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.293369763s)
	I0923 13:25:38.410583 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.282860168s)
	I0923 13:25:38.410697 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.057460055s)
	I0923 13:25:38.410709 2383828 addons.go:475] Verifying addon registry=true in "addons-133262"
	I0923 13:25:38.410817 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.313378103s)
	I0923 13:25:38.410987 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.827495446s)
	I0923 13:25:38.411193 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.758634316s)
	W0923 13:25:38.412175 2383828 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 13:25:38.412202 2383828 retry.go:31] will retry after 192.996519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
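This failure is an ordering race, not a bad manifest: the batch `kubectl apply` submitted the VolumeSnapshot CRDs and a VolumeSnapshotClass together, and the class was rejected because the just-created CRD was not yet established in API discovery ("no matches for kind ... ensure CRDs are installed first"). The retry (the `apply --force` at 13:25:38.606142 below) succeeds once the CRD is served. One way to avoid the race, sketched by shelling out to `kubectl wait` (an assumed alternative, not minikube's approach), is to block on the CRD's Established condition before applying dependent resources:

	// Wait for a CRD to report Established=True, then apply custom resources
	// of that kind.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForCRDEstablished(name string, timeout time.Duration) error {
		cmd := exec.Command("kubectl", "wait",
			"--for=condition=Established",
			"crd/"+name,
			fmt.Sprintf("--timeout=%s", timeout))
		return cmd.Run()
	}

	func main() {
		if err := waitForCRDEstablished("volumesnapshotclasses.snapshot.storage.k8s.io", 30*time.Second); err != nil {
			fmt.Println("CRD never became established:", err)
			return
		}
		// Safe to apply VolumeSnapshotClass objects now.
		exec.Command("kubectl", "apply", "-f",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml").Run()
	}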
	I0923 13:25:38.411249 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.720791712s)
	I0923 13:25:38.412240 2383828 addons.go:475] Verifying addon metrics-server=true in "addons-133262"
	I0923 13:25:38.411301 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.42762092s)
	I0923 13:25:38.413014 2383828 out.go:177] * Verifying ingress addon...
	I0923 13:25:38.413041 2383828 out.go:177] * Verifying registry addon...
	I0923 13:25:38.414885 2383828 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-133262 service yakd-dashboard -n yakd-dashboard
	
	I0923 13:25:38.419102 2383828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 13:25:38.419832 2383828 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0923 13:25:38.460388 2383828 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
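This warning is a textbook optimistic-concurrency conflict: minikube read the local-path StorageClass, something else updated it in the meantime, and the follow-up update carried a stale resourceVersion, so the API server refused it ("the object has been modified"). The usual remedies are to re-read and retry, or to use a patch, which does not send a resourceVersion at all. A hypothetical fix-up sketch using a strategic-merge patch (the real `storageclass.kubernetes.io/is-default-class` annotation; retry count is an assumption):

	// Clear the default-class annotation via patch, retrying transient failures.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func markNonDefault(sc string) error {
		patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}`
		var err error
		for i := 0; i < 3; i++ {
			out, e := exec.Command("kubectl", "patch", "storageclass", sc,
				"-p", patch).CombinedOutput()
			if e == nil {
				return nil
			}
			err = fmt.Errorf("%v: %s", e, out)
			time.Sleep(time.Second)
		}
		return err
	}

	func main() {
		if err := markNonDefault("local-path"); err != nil {
			fmt.Println(err)
		}
	}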
	I0923 13:25:38.463085 2383828 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 13:25:38.463119 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:38.463335 2383828 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 13:25:38.463348 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
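From here the log settles into kapi.go's poll loop: list the pods matching each addon's label selector, report the aggregate phase, and repeat until everything is Running, which produces the long run of "current state: Pending" lines below. An equivalent one-shot readiness check, expressed as a single `kubectl wait` (an assumed equivalent, not minikube's code):

	// Block until all pods matching a label selector report Ready.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func waitForPods(ns, selector, timeout string) error {
		out, err := exec.Command("kubectl", "wait", "pod",
			"-n", ns,
			"-l", selector,
			"--for=condition=Ready",
			"--timeout="+timeout).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Same selector the log polls above for the registry addon.
		if err := waitForPods("kube-system", "kubernetes.io/minikube-addons=registry", "6m"); err != nil {
			fmt.Println(err)
		}
	}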
	I0923 13:25:38.606142 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:25:39.005138 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:39.021026 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:39.118283 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.966973654s)
	I0923 13:25:39.118406 2383828 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-133262"
	I0923 13:25:39.121268 2383828 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 13:25:39.124770 2383828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 13:25:39.156704 2383828 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 13:25:39.156770 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:39.439937 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:39.444350 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:39.640632 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:39.925058 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:39.925531 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:39.971775 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.365584799s)
	I0923 13:25:40.129039 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:40.286956 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:40.425951 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:40.427311 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:40.630301 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:40.924856 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:40.925869 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:41.129822 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:41.425406 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:41.425833 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:41.629752 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:41.926255 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:41.927436 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:42.132576 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:42.424685 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:42.424871 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:42.635312 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:42.637181 2383828 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 13:25:42.637349 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:42.660570 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:42.775194 2383828 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 13:25:42.787736 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:42.799009 2383828 addons.go:234] Setting addon gcp-auth=true in "addons-133262"
	I0923 13:25:42.799068 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:42.799666 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:42.819110 2383828 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 13:25:42.819169 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:42.837017 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:42.928031 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:42.928785 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:42.943532 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:25:42.946272 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 13:25:42.948939 2383828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 13:25:42.948964 2383828 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 13:25:42.967771 2383828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 13:25:42.967799 2383828 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 13:25:42.986757 2383828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 13:25:42.986781 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 13:25:43.007805 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 13:25:43.133188 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:43.440942 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:43.448247 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:43.598168 2383828 addons.go:475] Verifying addon gcp-auth=true in "addons-133262"
	I0923 13:25:43.600804 2383828 out.go:177] * Verifying gcp-auth addon...
	I0923 13:25:43.604541 2383828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 13:25:43.614384 2383828 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 13:25:43.614417 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:43.714958 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:43.927260 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:43.928296 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:44.108166 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:44.129766 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:44.425907 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:44.428947 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:44.608989 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:44.629165 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:44.924442 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:44.924911 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:45.109621 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:45.134831 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:45.286174 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:45.423899 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:45.424213 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:45.608699 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:45.630848 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:45.923717 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:45.924806 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:46.108554 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:46.134002 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:46.423457 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:46.423949 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:46.607763 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:46.628666 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:46.923946 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:46.924341 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:47.108334 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:47.128936 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:47.424042 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:47.425089 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:47.608593 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:47.628453 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:47.786101 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:47.924546 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:47.925407 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:48.107567 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:48.129020 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:48.424760 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:48.425682 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:48.607946 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:48.629119 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:48.923346 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:48.924113 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:49.107465 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:49.128820 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:49.423331 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:49.424397 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:49.609143 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:49.628320 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:49.786566 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:49.924514 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:49.924812 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:50.108212 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:50.128656 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:50.423917 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:50.426088 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:50.607776 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:50.627970 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:50.923145 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:50.923993 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:51.108698 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:51.129331 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:51.424158 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:51.424921 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:51.607952 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:51.628227 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:51.923369 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:51.924228 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:52.107969 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:52.129521 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:52.286509 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:52.424000 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:52.424964 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:52.608383 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:52.628653 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:52.924655 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:52.925393 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:53.108542 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:53.129828 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:53.424003 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:53.424995 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:53.608550 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:53.629037 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:53.923760 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:53.924375 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:54.108444 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:54.128575 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:54.424136 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:54.424452 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:54.608327 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:54.628015 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:54.786192 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:54.924376 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:54.925390 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:55.108016 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:55.129009 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:55.424425 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:55.424771 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:55.608044 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:55.628274 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:55.924611 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:55.925486 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:56.108074 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:56.128941 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:56.423602 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:56.424012 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:56.607850 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:56.628723 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:56.786783 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:56.923439 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:56.924812 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:57.108314 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:57.128666 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:57.423521 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:57.424105 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:57.607781 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:57.628448 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:57.925107 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:57.926033 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:58.108563 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:58.128779 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:58.423804 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:58.424364 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:58.607922 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:58.628710 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:58.923609 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:58.924488 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:59.107804 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:59.128570 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:59.285918 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:59.424134 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:59.424388 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:59.607622 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:59.628423 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:59.923463 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:59.925039 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:00.109338 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:00.130159 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:00.423701 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:00.424594 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:00.607942 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:00.629162 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:00.924187 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:00.924521 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:01.114269 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:01.132902 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:01.286067 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:26:01.424165 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:01.424989 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:01.608584 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:01.628626 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:01.924141 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:01.925153 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:02.109683 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:02.129574 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:02.425376 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:02.427118 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:02.608407 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:02.628353 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:02.927784 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:02.929710 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:03.108803 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:03.128259 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:03.286985 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:26:03.423996 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:03.425124 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:03.607627 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:03.628770 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:03.924209 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:03.925264 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:04.107519 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:04.128969 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:04.424199 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:04.425254 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:04.607825 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:04.628956 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:04.923561 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:04.924425 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:05.108488 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:05.129008 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:05.422992 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:05.424323 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:05.607517 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:05.629014 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:05.786404 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:26:05.924275 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:05.925412 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:06.108780 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:06.127963 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:06.423656 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:06.424949 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:06.608371 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:06.628604 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:06.924164 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:06.924953 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:07.108615 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:07.129007 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:07.424171 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:07.424999 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:07.608886 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:07.628756 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:07.786544 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:26:07.926683 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:07.929614 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:08.108197 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:08.129087 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:08.423866 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:08.424407 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:08.608426 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:08.628441 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:08.924009 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:08.924680 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:09.108487 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:09.128421 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:09.423250 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:09.424626 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:09.607764 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:09.628296 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:09.923752 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:09.924369 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:10.108282 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:10.128775 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:10.286079 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:26:10.423330 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:10.424419 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:10.608417 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:10.628309 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:10.924180 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:10.925450 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:11.107825 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:11.128085 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:11.423818 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:11.424888 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:11.607701 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:11.628640 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:11.923640 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:11.924245 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:12.108061 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:12.129105 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:12.287506 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:26:12.424922 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:12.425373 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:12.608172 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:12.628478 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:12.924499 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:12.925567 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:13.107976 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:13.128509 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:13.425009 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:13.425224 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:13.608323 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:13.628636 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:13.923641 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:13.924617 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:14.107745 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:14.127838 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:14.423884 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:14.424011 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:14.608057 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:14.628329 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:14.785799 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:26:14.924654 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:14.925939 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:15.109613 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:15.128760 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:15.424208 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:15.425109 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:15.608425 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:15.628445 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:15.923911 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:15.925392 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:16.108115 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:16.130852 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:16.424605 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:16.425002 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:16.619021 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:16.636992 2383828 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 13:26:16.637020 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:16.805123 2383828 node_ready.go:49] node "addons-133262" has status "Ready":"True"
	I0923 13:26:16.805149 2383828 node_ready.go:38] duration metric: took 41.022536428s for node "addons-133262" to be "Ready" ...
	I0923 13:26:16.805159 2383828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:26:16.913885 2383828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r5mdg" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:16.951549 2383828 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 13:26:16.951577 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:16.952438 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:17.127438 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:17.160927 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:17.432503 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:17.433603 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:17.608480 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:17.630006 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:17.925104 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:17.926484 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:18.107966 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:18.129404 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:18.421302 2383828 pod_ready.go:93] pod "coredns-7c65d6cfc9-r5mdg" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.421379 2383828 pod_ready.go:82] duration metric: took 1.507456205s for pod "coredns-7c65d6cfc9-r5mdg" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.421409 2383828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.425730 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:18.427429 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:18.429046 2383828 pod_ready.go:93] pod "etcd-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.429069 2383828 pod_ready.go:82] duration metric: took 7.651873ms for pod "etcd-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.429084 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.434109 2383828 pod_ready.go:93] pod "kube-apiserver-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.434138 2383828 pod_ready.go:82] duration metric: took 5.046437ms for pod "kube-apiserver-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.434150 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.439598 2383828 pod_ready.go:93] pod "kube-controller-manager-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.439681 2383828 pod_ready.go:82] duration metric: took 5.521536ms for pod "kube-controller-manager-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.439712 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qsbr8" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.448014 2383828 pod_ready.go:93] pod "kube-proxy-qsbr8" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.448041 2383828 pod_ready.go:82] duration metric: took 8.31315ms for pod "kube-proxy-qsbr8" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.448052 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.608120 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:18.629275 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:18.819692 2383828 pod_ready.go:93] pod "kube-scheduler-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.819716 2383828 pod_ready.go:82] duration metric: took 371.655421ms for pod "kube-scheduler-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.819728 2383828 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.925018 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:18.926638 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:19.108614 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:19.129912 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:19.426498 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:19.434643 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:19.609140 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:19.630745 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:19.926844 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:19.927339 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:20.114093 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:20.130462 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:20.425302 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:20.425636 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:20.609266 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:20.630594 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:20.827914 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:20.927587 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:20.929214 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:21.108706 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:21.132032 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:21.424631 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:21.425874 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:21.609147 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:21.630433 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:21.925792 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:21.928179 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:22.108622 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:22.129868 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:22.427188 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:22.428634 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:22.609061 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:22.630978 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:22.927405 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:22.928806 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:23.107580 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:23.130630 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:23.335949 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:23.426013 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:23.427232 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:23.610362 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:23.631331 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:23.927726 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:23.929185 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:24.108527 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:24.130795 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:24.425075 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:24.426215 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:24.608374 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:24.629451 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:24.928086 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:24.931538 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:25.111794 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:25.131785 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:25.426856 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:25.427580 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:25.608708 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:25.630668 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:25.825870 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:25.928651 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:25.929663 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:26.108641 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:26.131563 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:26.427286 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:26.427960 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:26.608115 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:26.633744 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:26.926459 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:26.927726 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:27.109104 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:27.130705 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:27.427053 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:27.427397 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:27.624099 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:27.630018 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:27.828338 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:27.928155 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:27.929683 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:28.110349 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:28.143960 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:28.433172 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:28.435357 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:28.609573 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:28.630892 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:28.925681 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:28.926271 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:29.108330 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:29.129303 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:29.424904 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:29.425741 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:29.608510 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:29.710465 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:29.923972 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:29.925106 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:30.108770 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:30.130201 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:30.326027 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:30.426261 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:30.427582 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:30.608276 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:30.630718 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:30.924344 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:30.926833 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:31.108072 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:31.130159 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:31.427336 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:31.428549 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:31.608286 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:31.710749 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:31.924625 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:31.925735 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:32.108128 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:32.129672 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:32.424870 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:32.425427 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:32.608995 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:32.630396 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:32.826025 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:32.925488 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:32.927087 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:33.111899 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:33.131694 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:33.426016 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:33.427559 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:33.609572 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:33.630371 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:33.924532 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:33.925748 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:34.107968 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:34.129639 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:34.424332 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:34.425344 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:34.608778 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:34.630277 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:34.826657 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:34.925321 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:34.926268 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:35.108611 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:35.129999 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:35.424498 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:35.425426 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:35.608167 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:35.629677 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:35.938811 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:35.939969 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:36.109842 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:36.130915 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:36.424698 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:36.426376 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:36.612327 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:36.631843 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:36.827050 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:36.930622 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:36.932379 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:37.108391 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:37.130797 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:37.427855 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:37.429065 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:37.609000 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:37.631251 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:37.927282 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:37.928823 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:38.108882 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:38.130959 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:38.428793 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:38.430557 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:38.609116 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:38.631836 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:38.924409 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:38.924683 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:39.107807 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:39.130613 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:39.326392 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:39.424626 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:39.425794 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:39.607871 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:39.629703 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:39.925592 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:39.925659 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:40.107840 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:40.129321 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:40.425352 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:40.425960 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:40.614276 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:40.629941 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:40.925044 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:40.926229 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:41.108593 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:41.130189 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:41.426729 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:41.427709 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:41.608965 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:41.630359 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:41.826920 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:41.924926 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:41.925551 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:42.109578 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:42.133108 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:42.425407 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:42.427740 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:42.608654 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:42.630826 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:42.931020 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:42.937353 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:43.108583 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:43.132574 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:43.424400 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:43.425098 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:43.609963 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:43.629537 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:43.924682 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:43.926264 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:44.110084 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:44.130069 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:44.325695 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:44.424265 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:44.425922 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:44.608504 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:44.629877 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:44.924522 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:44.925695 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:45.110242 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:45.130936 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:45.424945 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:45.425411 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:45.608367 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:45.630543 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:45.925795 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:45.928543 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:46.109089 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:46.130246 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:46.326623 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:46.426401 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:46.427938 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:46.608698 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:46.631073 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:46.927536 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:46.929324 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:47.109064 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:47.130949 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:47.426667 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:47.427618 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:47.608470 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:47.630383 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:47.928101 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:47.929423 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:48.108252 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:48.131080 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:48.332183 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:48.425596 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:48.426896 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:48.610248 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:48.630206 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:48.925882 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:48.927170 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:49.108773 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:49.129674 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:49.433796 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:49.434273 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:49.608250 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:49.629496 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:49.924209 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:49.927394 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:50.112393 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:50.141056 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:50.426432 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:50.427848 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:50.609193 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:50.629564 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:50.826654 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:50.925277 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:50.925459 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:51.109502 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:51.129979 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:51.424931 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:51.426827 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:51.607779 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:51.630914 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:51.925515 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:51.926128 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:52.107821 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:52.129416 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:52.426982 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:52.428045 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:52.609048 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:52.635441 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:52.829448 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:52.927255 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:52.928637 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:53.114000 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:53.135575 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:53.425124 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:53.426490 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:53.608173 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:53.632288 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:53.924879 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:53.925839 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:54.108431 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:54.130064 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:54.423850 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:54.424803 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:54.608858 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:54.631272 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:54.925937 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:54.927386 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:55.114505 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:55.137604 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:55.336635 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:55.425715 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:55.427081 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:55.608839 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:55.632770 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:55.925063 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:55.925569 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:56.115411 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:56.131630 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:56.425028 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:56.426021 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:56.608664 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:56.629866 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:56.926440 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:56.926859 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:57.108467 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:57.130256 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:57.425565 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:57.426881 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:57.609766 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:57.631522 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:57.848276 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:57.925589 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:57.926613 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:58.108061 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:58.130231 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:58.428638 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:58.430028 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:58.610101 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:58.630423 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:58.939227 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:58.940370 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:59.108063 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:59.129831 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:59.424864 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:59.425049 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:59.608521 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:59.629400 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:59.924929 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:59.925607 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:00.109319 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:00.131385 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:00.326741 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:00.425134 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:00.425736 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:00.608187 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:00.630261 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:00.924611 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:00.925609 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:01.108482 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:01.131704 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:01.430029 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:01.434957 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:01.607779 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:01.630225 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:01.942371 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:01.943654 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:02.108432 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:02.130860 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:02.424998 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:02.426036 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:02.608703 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:02.631070 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:02.826705 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:02.940137 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:02.940713 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:03.108948 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:03.129909 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:03.425654 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:03.428546 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:03.608123 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:03.630220 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:03.929094 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:03.929953 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:04.108375 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:04.130124 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:04.425986 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:04.428220 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:04.609126 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:04.632554 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:04.828070 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:04.924862 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:04.926426 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:05.108702 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:05.130029 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:05.430290 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:05.432601 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:05.609965 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:05.629241 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:05.965785 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:05.986261 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:06.113906 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:06.221296 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:06.426093 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:06.426830 2383828 kapi.go:107] duration metric: took 1m28.007722418s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 13:27:06.609440 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:06.630984 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:06.828181 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:06.925007 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:07.108169 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:07.130553 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:07.429446 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:07.610515 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:07.631178 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:07.928566 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:08.153119 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:08.155484 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:08.425404 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:08.608582 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:08.631061 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:08.924414 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:09.108227 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:09.132719 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:09.326297 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:09.426358 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:09.608437 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:09.630725 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:09.925223 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:10.109249 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:10.132124 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:10.425940 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:10.608143 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:10.629578 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:10.938523 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:11.109170 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:11.130262 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:11.427987 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:11.610666 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:11.635149 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:11.825783 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:11.924894 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:12.110369 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:12.130346 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:12.424588 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:12.607769 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:12.629949 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:12.930645 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:13.108546 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:13.135919 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:13.426445 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:13.608884 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:13.630692 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:13.827763 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:13.925431 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:14.109183 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:14.129742 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:14.424960 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:14.608136 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:14.630105 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:14.924059 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:15.110266 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:15.130293 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:15.429250 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:15.609425 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:15.630519 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:15.925153 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:16.108867 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:16.130191 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:16.326153 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:16.424176 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:16.608289 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:16.629486 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:16.924711 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:17.108323 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:17.129255 2383828 kapi.go:107] duration metric: took 1m38.00448827s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 13:27:17.424604 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:17.607610 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:17.924643 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:18.108043 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:18.326219 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:18.424275 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:18.608499 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:18.924779 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:19.108343 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:19.424411 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:19.607534 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:19.925719 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:20.107995 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:20.326285 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:20.424926 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:20.608395 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:20.925436 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:21.108021 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:21.424136 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:21.608172 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:21.925823 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:22.109384 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:22.329093 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:22.425194 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:22.608312 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:22.924266 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:23.108430 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:23.425129 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:23.608158 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:23.925678 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:24.108712 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:24.424294 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:24.608749 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:24.830907 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:24.927893 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:25.115382 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:25.425227 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:25.608049 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:25.925661 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:26.108570 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:26.424563 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:26.608660 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:26.839697 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:26.926678 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:27.109456 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:27.427209 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:27.608835 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:27.924668 2383828 kapi.go:107] duration metric: took 1m49.504828577s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 13:27:28.108170 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:28.608618 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:29.109389 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:29.328626 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:29.609997 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:30.109077 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:30.609336 2383828 kapi.go:107] duration metric: took 1m47.004794044s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 13:27:30.611924 2383828 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-133262 cluster.
	I0923 13:27:30.614489 2383828 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 13:27:30.617196 2383828 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 13:27:30.620413 2383828 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 13:27:30.622924 2383828 addons.go:510] duration metric: took 1m58.634040955s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
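The gcp-auth notes above name an opt-out label key; a minimal sketch of applying it when creating a pod (the value "true" is an assumption, the log only specifies the `gcp-auth-skip-secret` key):

	# Hypothetical pod that opts out of GCP credential mounting via the
	# label key named in the log above; the value "true" is assumed.
	kubectl --context addons-133262 run demo --image=nginx \
	  --labels="gcp-auth-skip-secret=true"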
	I0923 13:27:31.825787 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:34.326055 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:36.326433 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:36.827741 2383828 pod_ready.go:93] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:27:36.827771 2383828 pod_ready.go:82] duration metric: took 1m18.008034234s for pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace to be "Ready" ...
	I0923 13:27:36.827784 2383828 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4m26g" in "kube-system" namespace to be "Ready" ...
	I0923 13:27:36.834630 2383828 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4m26g" in "kube-system" namespace has status "Ready":"True"
	I0923 13:27:36.834660 2383828 pod_ready.go:82] duration metric: took 6.867982ms for pod "nvidia-device-plugin-daemonset-4m26g" in "kube-system" namespace to be "Ready" ...
	I0923 13:27:36.834682 2383828 pod_ready.go:39] duration metric: took 1m20.029511263s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
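The pod_ready waits recorded above poll the Kubernetes API directly from Go; roughly the same readiness check expressed as a one-off command (a hypothetical manual equivalent, not what the test harness runs):

	# Wait up to 6m for kube-dns pods in kube-system to report Ready,
	# mirroring one of the label selectors listed in the log above.
	kubectl --context addons-133262 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m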
	I0923 13:27:36.834698 2383828 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:27:36.834732 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:27:36.834794 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:27:36.888124 2383828 cri.go:89] found id: "9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:36.888148 2383828 cri.go:89] found id: ""
	I0923 13:27:36.888156 2383828 logs.go:276] 1 containers: [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23]
	I0923 13:27:36.888219 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:36.893253 2383828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:27:36.893387 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:27:36.933867 2383828 cri.go:89] found id: "227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:36.933890 2383828 cri.go:89] found id: ""
	I0923 13:27:36.933898 2383828 logs.go:276] 1 containers: [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7]
	I0923 13:27:36.933953 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:36.937393 2383828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:27:36.937521 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:27:36.975388 2383828 cri.go:89] found id: "62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:36.975410 2383828 cri.go:89] found id: ""
	I0923 13:27:36.975418 2383828 logs.go:276] 1 containers: [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6]
	I0923 13:27:36.975488 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:36.978917 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:27:36.978992 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:27:37.026940 2383828 cri.go:89] found id: "1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:37.026968 2383828 cri.go:89] found id: ""
	I0923 13:27:37.026976 2383828 logs.go:276] 1 containers: [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09]
	I0923 13:27:37.027036 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:37.031174 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:27:37.031273 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:27:37.088807 2383828 cri.go:89] found id: "6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:37.088831 2383828 cri.go:89] found id: ""
	I0923 13:27:37.088838 2383828 logs.go:276] 1 containers: [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d]
	I0923 13:27:37.088896 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:37.092489 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:27:37.092589 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:27:37.130778 2383828 cri.go:89] found id: "3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:37.130803 2383828 cri.go:89] found id: ""
	I0923 13:27:37.130810 2383828 logs.go:276] 1 containers: [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a]
	I0923 13:27:37.130892 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:37.134501 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:27:37.134578 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:27:37.173172 2383828 cri.go:89] found id: "de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:37.173194 2383828 cri.go:89] found id: ""
	I0923 13:27:37.173202 2383828 logs.go:276] 1 containers: [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78]
	I0923 13:27:37.173269 2383828 ssh_runner.go:195] Run: which crictl
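The crictl probes above all follow one pattern; the same discovery expressed as a standalone loop (component list taken from the log, the loop itself is an illustration):

	# For each control-plane component, list all containers (-a) whose
	# name matches it and print only their IDs (--quiet), as above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$name"
	done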
	I0923 13:27:37.177038 2383828 logs.go:123] Gathering logs for dmesg ...
	I0923 13:27:37.177064 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:27:37.199500 2383828 logs.go:123] Gathering logs for kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] ...
	I0923 13:27:37.199538 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:37.265609 2383828 logs.go:123] Gathering logs for coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] ...
	I0923 13:27:37.265654 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:37.308188 2383828 logs.go:123] Gathering logs for kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] ...
	I0923 13:27:37.308222 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:37.364448 2383828 logs.go:123] Gathering logs for kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] ...
	I0923 13:27:37.364484 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:37.407944 2383828 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:27:37.407976 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:27:37.503765 2383828 logs.go:123] Gathering logs for kubelet ...
	I0923 13:27:37.503806 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 13:27:37.536529 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:37.536775 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:37.596083 2383828 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:27:37.596124 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:27:37.773537 2383828 logs.go:123] Gathering logs for etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] ...
	I0923 13:27:37.773566 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:37.829819 2383828 logs.go:123] Gathering logs for kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] ...
	I0923 13:27:37.829851 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:37.903553 2383828 logs.go:123] Gathering logs for kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] ...
	I0923 13:27:37.903589 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:37.949912 2383828 logs.go:123] Gathering logs for container status ...
	I0923 13:27:37.949945 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:27:38.018475 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:38.018552 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 13:27:38.018621 2383828 out.go:270] X Problems detected in kubelet:
	W0923 13:27:38.018634 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:38.018644 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:38.018658 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:38.018665 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
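
[editor's note] The block above is minikube's log-gathering loop: for each control-plane component it asks crictl for matching container IDs (sudo crictl ps -a --quiet --name=<component>), confirms the crictl path with "which crictl", then tails each container's logs with crictl logs --tail 400 <id>. A minimal sketch of that pattern in Go, assuming crictl is installed and runnable via sudo; the helper names are illustrative, not minikube's actual functions:

    // Sketch of the discovery-then-tail pattern in the log above.
    // Assumes crictl is on PATH and usable via sudo; the function
    // names are illustrative, not minikube's real helpers.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists the IDs crictl reports for containers whose
    // name matches the given component (e.g. "kube-apiserver").
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs prints the last n log lines of one container.
    func tailLogs(id string, n int) error {
        out, err := exec.Command("sudo", "crictl", "logs",
            fmt.Sprintf("--tail=%d", n), id).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(component)
            if err != nil {
                fmt.Println("listing", component, "failed:", err)
                continue
            }
            for _, id := range ids {
                _ = tailLogs(id, 400)
            }
        }
    }
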
	I0923 13:27:48.019881 2383828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:27:48.035316 2383828 api_server.go:72] duration metric: took 2m16.046841632s to wait for apiserver process to appear ...
	I0923 13:27:48.035344 2383828 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:27:48.035384 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:27:48.035446 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:27:48.085240 2383828 cri.go:89] found id: "9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:48.085263 2383828 cri.go:89] found id: ""
	I0923 13:27:48.085271 2383828 logs.go:276] 1 containers: [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23]
	I0923 13:27:48.085332 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.089041 2383828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:27:48.089114 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:27:48.127126 2383828 cri.go:89] found id: "227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:48.127146 2383828 cri.go:89] found id: ""
	I0923 13:27:48.127154 2383828 logs.go:276] 1 containers: [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7]
	I0923 13:27:48.127220 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.130855 2383828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:27:48.130931 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:27:48.169933 2383828 cri.go:89] found id: "62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:48.169956 2383828 cri.go:89] found id: ""
	I0923 13:27:48.169964 2383828 logs.go:276] 1 containers: [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6]
	I0923 13:27:48.170017 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.173593 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:27:48.173666 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:27:48.217851 2383828 cri.go:89] found id: "1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:48.217875 2383828 cri.go:89] found id: ""
	I0923 13:27:48.217920 2383828 logs.go:276] 1 containers: [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09]
	I0923 13:27:48.217983 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.221539 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:27:48.221608 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:27:48.260958 2383828 cri.go:89] found id: "6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:48.260982 2383828 cri.go:89] found id: ""
	I0923 13:27:48.260990 2383828 logs.go:276] 1 containers: [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d]
	I0923 13:27:48.261047 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.264814 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:27:48.264887 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:27:48.303207 2383828 cri.go:89] found id: "3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:48.303227 2383828 cri.go:89] found id: ""
	I0923 13:27:48.303234 2383828 logs.go:276] 1 containers: [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a]
	I0923 13:27:48.303290 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.307190 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:27:48.307311 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:27:48.345328 2383828 cri.go:89] found id: "de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:48.345353 2383828 cri.go:89] found id: ""
	I0923 13:27:48.345361 2383828 logs.go:276] 1 containers: [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78]
	I0923 13:27:48.345415 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.349052 2383828 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:27:48.349077 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:27:48.440481 2383828 logs.go:123] Gathering logs for kubelet ...
	I0923 13:27:48.440519 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 13:27:48.471627 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:48.471961 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:48.532975 2383828 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:27:48.533015 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:27:48.676516 2383828 logs.go:123] Gathering logs for kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] ...
	I0923 13:27:48.676551 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:48.743456 2383828 logs.go:123] Gathering logs for etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] ...
	I0923 13:27:48.743491 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:48.801610 2383828 logs.go:123] Gathering logs for coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] ...
	I0923 13:27:48.801645 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:48.844944 2383828 logs.go:123] Gathering logs for kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] ...
	I0923 13:27:48.844975 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:48.892863 2383828 logs.go:123] Gathering logs for kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] ...
	I0923 13:27:48.892898 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:48.965213 2383828 logs.go:123] Gathering logs for dmesg ...
	I0923 13:27:48.965246 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:27:48.982076 2383828 logs.go:123] Gathering logs for kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] ...
	I0923 13:27:48.982107 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:49.032446 2383828 logs.go:123] Gathering logs for kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] ...
	I0923 13:27:49.032476 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:49.081688 2383828 logs.go:123] Gathering logs for container status ...
	I0923 13:27:49.081717 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:27:49.140973 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:49.141006 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 13:27:49.141069 2383828 out.go:270] X Problems detected in kubelet:
	W0923 13:27:49.141087 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:49.141102 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:49.141110 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:49.141123 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
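
[editor's note] After this second gathering pass, minikube polls the apiserver healthz endpoint until it answers 200 with body "ok", as the next lines show. A minimal sketch of such a probe; certificate verification is skipped here only to keep the example short (minikube itself trusts the cluster CA from the kubeconfig):

    // Poll https://<apiserver>/healthz until it reports healthy.
    // InsecureSkipVerify is a shortcut for this sketch only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver is healthy")
                    return
                }
            }
            time.Sleep(time.Second) // retry until the endpoint is healthy
        }
    }
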
	I0923 13:27:59.141822 2383828 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:27:59.149585 2383828 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 13:27:59.150578 2383828 api_server.go:141] control plane version: v1.31.1
	I0923 13:27:59.150608 2383828 api_server.go:131] duration metric: took 11.115252928s to wait for apiserver health ...
	I0923 13:27:59.150617 2383828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:27:59.150645 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:27:59.150719 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:27:59.197911 2383828 cri.go:89] found id: "9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:59.197932 2383828 cri.go:89] found id: ""
	I0923 13:27:59.197941 2383828 logs.go:276] 1 containers: [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23]
	I0923 13:27:59.197995 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.201940 2383828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:27:59.202006 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:27:59.238531 2383828 cri.go:89] found id: "227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:59.238551 2383828 cri.go:89] found id: ""
	I0923 13:27:59.238559 2383828 logs.go:276] 1 containers: [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7]
	I0923 13:27:59.238611 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.242085 2383828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:27:59.242204 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:27:59.280989 2383828 cri.go:89] found id: "62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:59.281010 2383828 cri.go:89] found id: ""
	I0923 13:27:59.281017 2383828 logs.go:276] 1 containers: [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6]
	I0923 13:27:59.281074 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.284557 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:27:59.284637 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:27:59.324082 2383828 cri.go:89] found id: "1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:59.324103 2383828 cri.go:89] found id: ""
	I0923 13:27:59.324111 2383828 logs.go:276] 1 containers: [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09]
	I0923 13:27:59.324165 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.327636 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:27:59.327740 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:27:59.365535 2383828 cri.go:89] found id: "6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:59.365562 2383828 cri.go:89] found id: ""
	I0923 13:27:59.365572 2383828 logs.go:276] 1 containers: [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d]
	I0923 13:27:59.365643 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.369260 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:27:59.369333 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:27:59.406889 2383828 cri.go:89] found id: "3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:59.406956 2383828 cri.go:89] found id: ""
	I0923 13:27:59.406971 2383828 logs.go:276] 1 containers: [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a]
	I0923 13:27:59.407044 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.410404 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:27:59.410504 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:27:59.464101 2383828 cri.go:89] found id: "de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:59.464123 2383828 cri.go:89] found id: ""
	I0923 13:27:59.464130 2383828 logs.go:276] 1 containers: [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78]
	I0923 13:27:59.464210 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.467715 2383828 logs.go:123] Gathering logs for dmesg ...
	I0923 13:27:59.467741 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:27:59.484127 2383828 logs.go:123] Gathering logs for kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] ...
	I0923 13:27:59.484159 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:59.535894 2383828 logs.go:123] Gathering logs for kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] ...
	I0923 13:27:59.535971 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:59.581931 2383828 logs.go:123] Gathering logs for container status ...
	I0923 13:27:59.581956 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:27:59.630190 2383828 logs.go:123] Gathering logs for kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] ...
	I0923 13:27:59.630220 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:59.697374 2383828 logs.go:123] Gathering logs for kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] ...
	I0923 13:27:59.697409 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:59.735991 2383828 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:27:59.736021 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:27:59.826571 2383828 logs.go:123] Gathering logs for kubelet ...
	I0923 13:27:59.826656 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 13:27:59.899998 2383828 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:27:59.900035 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:28:00.099569 2383828 logs.go:123] Gathering logs for etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] ...
	I0923 13:28:00.099607 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:28:00.174513 2383828 logs.go:123] Gathering logs for coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] ...
	I0923 13:28:00.174556 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:28:00.241997 2383828 logs.go:123] Gathering logs for kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] ...
	I0923 13:28:00.242034 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:28:02.811205 2383828 system_pods.go:59] 18 kube-system pods found
	I0923 13:28:02.811254 2383828 system_pods.go:61] "coredns-7c65d6cfc9-r5mdg" [244c7077-c0d1-4d2d-92f7-49811a2e7840] Running
	I0923 13:28:02.811262 2383828 system_pods.go:61] "csi-hostpath-attacher-0" [2dfdc637-b058-47a4-8127-066e22a8c844] Running
	I0923 13:28:02.811268 2383828 system_pods.go:61] "csi-hostpath-resizer-0" [bf94dfec-f4ec-4276-8c84-e9d52b353dd1] Running
	I0923 13:28:02.811273 2383828 system_pods.go:61] "csi-hostpathplugin-4l5sb" [4b14671b-9a65-4b4f-9656-1a542720db35] Running
	I0923 13:28:02.811278 2383828 system_pods.go:61] "etcd-addons-133262" [ccd2243d-7923-4bd5-aad1-4bcdf84093b0] Running
	I0923 13:28:02.811282 2383828 system_pods.go:61] "kindnet-j682f" [30af3434-889d-4dfc-933a-a18b65eae56b] Running
	I0923 13:28:02.811286 2383828 system_pods.go:61] "kube-apiserver-addons-133262" [a07b8088-fb80-4c58-9f12-a59ce48acae6] Running
	I0923 13:28:02.811290 2383828 system_pods.go:61] "kube-controller-manager-addons-133262" [402fc2e9-9278-4d3c-ba42-58cf9e6f7256] Running
	I0923 13:28:02.811295 2383828 system_pods.go:61] "kube-ingress-dns-minikube" [f3f96ece-39b2-4aef-afc3-deeac0208c34] Running
	I0923 13:28:02.811299 2383828 system_pods.go:61] "kube-proxy-qsbr8" [352eb868-c25d-49b6-9c55-9960dc2cdf8e] Running
	I0923 13:28:02.811303 2383828 system_pods.go:61] "kube-scheduler-addons-133262" [a1b18f24-3925-4dbd-adbf-b70661d68d91] Running
	I0923 13:28:02.811307 2383828 system_pods.go:61] "metrics-server-84c5f94fbc-dqnhw" [6d7335f6-5dfb-4227-9606-8d8b1b126d40] Running
	I0923 13:28:02.811321 2383828 system_pods.go:61] "nvidia-device-plugin-daemonset-4m26g" [c0e73bf1-5273-4a14-9517-202ce22276b8] Running
	I0923 13:28:02.811325 2383828 system_pods.go:61] "registry-66c9cd494c-2g5d2" [d093e650-6688-49f8-9c46-28a49dd5a974] Running
	I0923 13:28:02.811328 2383828 system_pods.go:61] "registry-proxy-pqtjc" [cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8] Running
	I0923 13:28:02.811339 2383828 system_pods.go:61] "snapshot-controller-56fcc65765-5t68w" [15a9f6f7-dd61-455c-be65-26312ab5fa53] Running
	I0923 13:28:02.811343 2383828 system_pods.go:61] "snapshot-controller-56fcc65765-mjwxw" [8d203518-0a49-462e-b208-58bf3d4f9059] Running
	I0923 13:28:02.811346 2383828 system_pods.go:61] "storage-provisioner" [c54ff386-7dac-4422-9ce3-010b14a0da61] Running
	I0923 13:28:02.811353 2383828 system_pods.go:74] duration metric: took 3.660729215s to wait for pod list to return data ...
	I0923 13:28:02.811364 2383828 default_sa.go:34] waiting for default service account to be created ...
	I0923 13:28:02.814522 2383828 default_sa.go:45] found service account: "default"
	I0923 13:28:02.814550 2383828 default_sa.go:55] duration metric: took 3.179207ms for default service account to be created ...
	I0923 13:28:02.814561 2383828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 13:28:02.824546 2383828 system_pods.go:86] 18 kube-system pods found
	I0923 13:28:02.824586 2383828 system_pods.go:89] "coredns-7c65d6cfc9-r5mdg" [244c7077-c0d1-4d2d-92f7-49811a2e7840] Running
	I0923 13:28:02.824595 2383828 system_pods.go:89] "csi-hostpath-attacher-0" [2dfdc637-b058-47a4-8127-066e22a8c844] Running
	I0923 13:28:02.824600 2383828 system_pods.go:89] "csi-hostpath-resizer-0" [bf94dfec-f4ec-4276-8c84-e9d52b353dd1] Running
	I0923 13:28:02.824627 2383828 system_pods.go:89] "csi-hostpathplugin-4l5sb" [4b14671b-9a65-4b4f-9656-1a542720db35] Running
	I0923 13:28:02.824639 2383828 system_pods.go:89] "etcd-addons-133262" [ccd2243d-7923-4bd5-aad1-4bcdf84093b0] Running
	I0923 13:28:02.824644 2383828 system_pods.go:89] "kindnet-j682f" [30af3434-889d-4dfc-933a-a18b65eae56b] Running
	I0923 13:28:02.824650 2383828 system_pods.go:89] "kube-apiserver-addons-133262" [a07b8088-fb80-4c58-9f12-a59ce48acae6] Running
	I0923 13:28:02.824661 2383828 system_pods.go:89] "kube-controller-manager-addons-133262" [402fc2e9-9278-4d3c-ba42-58cf9e6f7256] Running
	I0923 13:28:02.824666 2383828 system_pods.go:89] "kube-ingress-dns-minikube" [f3f96ece-39b2-4aef-afc3-deeac0208c34] Running
	I0923 13:28:02.824670 2383828 system_pods.go:89] "kube-proxy-qsbr8" [352eb868-c25d-49b6-9c55-9960dc2cdf8e] Running
	I0923 13:28:02.824680 2383828 system_pods.go:89] "kube-scheduler-addons-133262" [a1b18f24-3925-4dbd-adbf-b70661d68d91] Running
	I0923 13:28:02.824685 2383828 system_pods.go:89] "metrics-server-84c5f94fbc-dqnhw" [6d7335f6-5dfb-4227-9606-8d8b1b126d40] Running
	I0923 13:28:02.824707 2383828 system_pods.go:89] "nvidia-device-plugin-daemonset-4m26g" [c0e73bf1-5273-4a14-9517-202ce22276b8] Running
	I0923 13:28:02.824719 2383828 system_pods.go:89] "registry-66c9cd494c-2g5d2" [d093e650-6688-49f8-9c46-28a49dd5a974] Running
	I0923 13:28:02.824724 2383828 system_pods.go:89] "registry-proxy-pqtjc" [cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8] Running
	I0923 13:28:02.824744 2383828 system_pods.go:89] "snapshot-controller-56fcc65765-5t68w" [15a9f6f7-dd61-455c-be65-26312ab5fa53] Running
	I0923 13:28:02.824749 2383828 system_pods.go:89] "snapshot-controller-56fcc65765-mjwxw" [8d203518-0a49-462e-b208-58bf3d4f9059] Running
	I0923 13:28:02.824755 2383828 system_pods.go:89] "storage-provisioner" [c54ff386-7dac-4422-9ce3-010b14a0da61] Running
	I0923 13:28:02.824763 2383828 system_pods.go:126] duration metric: took 10.19587ms to wait for k8s-apps to be running ...
	I0923 13:28:02.824776 2383828 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:28:02.824845 2383828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:28:02.836586 2383828 system_svc.go:56] duration metric: took 11.795464ms WaitForService to wait for kubelet
	I0923 13:28:02.836625 2383828 kubeadm.go:582] duration metric: took 2m30.848156578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:28:02.836643 2383828 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:28:02.840270 2383828 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:28:02.840307 2383828 node_conditions.go:123] node cpu capacity is 2
	I0923 13:28:02.840319 2383828 node_conditions.go:105] duration metric: took 3.655882ms to run NodePressure ...
	I0923 13:28:02.840330 2383828 start.go:241] waiting for startup goroutines ...
	I0923 13:28:02.840338 2383828 start.go:246] waiting for cluster config update ...
	I0923 13:28:02.840354 2383828 start.go:255] writing updated cluster config ...
	I0923 13:28:02.840649 2383828 ssh_runner.go:195] Run: rm -f paused
	I0923 13:28:03.209187 2383828 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 13:28:03.213065 2383828 out.go:177] * Done! kubectl is now configured to use "addons-133262" cluster and "default" namespace by default
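
[editor's note] The readiness checks leading up to "Done!" are simple probes; the kubelet check above, for instance, is nothing more than the exit status of systemctl is-active. A sketch of that probe, assuming sudo and systemctl are available:

    // The kubelet-service probe from the log: with --quiet, systemctl
    // prints nothing and its exit status is the whole answer (0 = active).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active",
            "--quiet", "service", "kubelet").Run() == nil
    }

    func main() {
        fmt.Println("kubelet active:", kubeletActive())
    }
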
	
	
	==> CRI-O <==
	Sep 23 13:37:15 addons-133262 crio[966]: time="2024-09-23 13:37:15.879525064Z" level=info msg="Removed container bcd42d553ae6033832332420baf58f386527e8b5fa044b7d8fe92de503ddf35d: gadget/gadget-ncv7d/gadget" id=7fac8e9a-ee68-402b-ac22-4ba253074f2e name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 13:37:17 addons-133262 crio[966]: time="2024-09-23 13:37:17.149042285Z" level=info msg="Stopping pod sandbox: b7880c0cb9257b385fe31c5eabdc0191a6a31752f993d5ca78cd0f63a8aae463" id=f76b460a-7b7b-4c10-bbcb-9f15d0a97063 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:37:17 addons-133262 crio[966]: time="2024-09-23 13:37:17.149322918Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:b7880c0cb9257b385fe31c5eabdc0191a6a31752f993d5ca78cd0f63a8aae463 UID:2b0a3292-cbd1-4a87-bddf-6234359cdf59 NetNS:/var/run/netns/27c20095-695d-4530-bd3e-816411b93ba3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 23 13:37:17 addons-133262 crio[966]: time="2024-09-23 13:37:17.149467440Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Sep 23 13:37:17 addons-133262 crio[966]: time="2024-09-23 13:37:17.205186790Z" level=info msg="Stopped pod sandbox: b7880c0cb9257b385fe31c5eabdc0191a6a31752f993d5ca78cd0f63a8aae463" id=f76b460a-7b7b-4c10-bbcb-9f15d0a97063 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:37:17 addons-133262 crio[966]: time="2024-09-23 13:37:17.929465976Z" level=info msg="Stopping container: aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730 (timeout: 30s)" id=cbea9ef6-6dc1-43a3-a629-56bd807c4dd3 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 13:37:17 addons-133262 conmon[3228]: conmon aa4fd929ada1f744c7ee <ninfo>: container 3239 exited with status 2
	Sep 23 13:37:17 addons-133262 crio[966]: time="2024-09-23 13:37:17.964343689Z" level=info msg="Stopping container: ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762 (timeout: 30s)" id=b768912e-d8f6-43be-beed-383128579d22 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.082239345Z" level=info msg="Stopped container aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730: kube-system/registry-66c9cd494c-2g5d2/registry" id=cbea9ef6-6dc1-43a3-a629-56bd807c4dd3 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.083048885Z" level=info msg="Stopping pod sandbox: 7fb1bbe51aeaa15bb77f75f5e4eb209e65ab7363d206850885c8d01ac6cb868e" id=d6d478ff-d759-45ba-b6b4-a0543b12fdc7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.083301548Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-2g5d2 Namespace:kube-system ID:7fb1bbe51aeaa15bb77f75f5e4eb209e65ab7363d206850885c8d01ac6cb868e UID:d093e650-6688-49f8-9c46-28a49dd5a974 NetNS:/var/run/netns/c27b4cdc-bc0b-490f-ad5d-e1e5e6fc1d34 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.083439973Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-2g5d2 from CNI network \"kindnet\" (type=ptp)"
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.115987444Z" level=info msg="Stopped pod sandbox: 7fb1bbe51aeaa15bb77f75f5e4eb209e65ab7363d206850885c8d01ac6cb868e" id=d6d478ff-d759-45ba-b6b4-a0543b12fdc7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.133844978Z" level=info msg="Stopped container ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762: kube-system/registry-proxy-pqtjc/registry-proxy" id=b768912e-d8f6-43be-beed-383128579d22 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.134624758Z" level=info msg="Stopping pod sandbox: e3f3c656139d46161ae44593ec9612d69434a5ae2dbd5abe23d36f5c3d9777e3" id=1715c58d-c119-4c30-b87c-e83a8b38c2a6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.144262424Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-TFFBGBAF2FNBESMB - [0:0]\n:KUBE-HP-QCC5X5SU7NASFJ2R - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-XXN7ZDG5NCGSBBQC - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-d2pjp_ingress-nginx_7d45db68-99b0-41cf-a495-d22b22b643fb_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-TFFBGBAF2FNBESMB\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-d2pjp_ingress-nginx_7d45db68-99b0-41cf-a495-d22b22b643fb_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-XXN7ZDG5NCGSBBQC\n-A KUBE-HP-TFFBGBAF2FNBESMB -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-d2pjp_ingress-nginx_7d45db68-99b0-41cf-a495-d22b22b643fb_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-TFFBGBAF2FNBESMB -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-d2pjp_ingress-nginx_7d45db68-99b0-41cf-a4
95-d22b22b643fb_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.19:443\n-A KUBE-HP-XXN7ZDG5NCGSBBQC -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-d2pjp_ingress-nginx_7d45db68-99b0-41cf-a495-d22b22b643fb_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-XXN7ZDG5NCGSBBQC -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-d2pjp_ingress-nginx_7d45db68-99b0-41cf-a495-d22b22b643fb_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.19:80\n-X KUBE-HP-QCC5X5SU7NASFJ2R\nCOMMIT\n"
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.147476182Z" level=info msg="Closing host port tcp:5000"
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.149189422Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.149389704Z" level=info msg="Got pod network &{Name:registry-proxy-pqtjc Namespace:kube-system ID:e3f3c656139d46161ae44593ec9612d69434a5ae2dbd5abe23d36f5c3d9777e3 UID:cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8 NetNS:/var/run/netns/f7cb9694-6ce9-4d3b-9be7-ac8dab74ed2b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.149528795Z" level=info msg="Deleting pod kube-system_registry-proxy-pqtjc from CNI network \"kindnet\" (type=ptp)"
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.164305416Z" level=info msg="Stopped pod sandbox: e3f3c656139d46161ae44593ec9612d69434a5ae2dbd5abe23d36f5c3d9777e3" id=1715c58d-c119-4c30-b87c-e83a8b38c2a6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.868922278Z" level=info msg="Removing container: aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730" id=6dbace96-7221-4d0f-a57f-e7f2ee01a4bb name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.917756817Z" level=info msg="Removed container aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730: kube-system/registry-66c9cd494c-2g5d2/registry" id=6dbace96-7221-4d0f-a57f-e7f2ee01a4bb name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.920306722Z" level=info msg="Removing container: ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762" id=0827b91c-ab16-456d-bb34-ac878df9023d name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 13:37:18 addons-133262 crio[966]: time="2024-09-23 13:37:18.945761428Z" level=info msg="Removed container ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762: kube-system/registry-proxy-pqtjc/registry-proxy" id=0827b91c-ab16-456d-bb34-ac878df9023d name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c564c5ad6a593       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            5 seconds ago       Exited              gadget                                   7                   9add3703dbec8       gadget-ncv7d
	9a4c2f0385a81       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             43 seconds ago      Exited              helper-pod                               0                   efd489d23c5b1       helper-pod-delete-pvc-ba93c3ca-4ceb-4c2d-8d75-76b896b20b5e
	278bec0da9ba4       docker.io/library/busybox@sha256:71e065368796c7368a99a072019b9fe73e28e225ae9882430579ec49a1e46235                                            47 seconds ago      Exited              busybox                                  0                   f1b1530ad3f1d       test-local-path
	e616a99ec37f5       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            52 seconds ago      Exited              helper-pod                               0                   a967302547633       helper-pod-create-pvc-ba93c3ca-4ceb-4c2d-8d75-76b896b20b5e
	334680bd78e33       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                                 9 minutes ago       Running             gcp-auth                                 0                   2c1d4aa6e8775       gcp-auth-89d5ffd79-sn4tn
	0f16e342a3584       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3                             9 minutes ago       Running             controller                               0                   53cf4c8305e8c       ingress-nginx-controller-bc57996ff-d2pjp
	71e2c43a35435       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   948f1aedfa7f6       csi-hostpathplugin-4l5sb
	a9c0aaaf9bff8       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          10 minutes ago      Running             csi-provisioner                          0                   948f1aedfa7f6       csi-hostpathplugin-4l5sb
	1cc785331b728       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                                             10 minutes ago      Exited              patch                                    2                   f15774b38ba8c       ingress-nginx-admission-patch-lzqrp
	31f4e9083c140       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            10 minutes ago      Running             liveness-probe                           0                   948f1aedfa7f6       csi-hostpathplugin-4l5sb
	789f409548613       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           10 minutes ago      Running             hostpath                                 0                   948f1aedfa7f6       csi-hostpathplugin-4l5sb
	34fee4b563ce3       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                10 minutes ago      Running             node-driver-registrar                    0                   948f1aedfa7f6       csi-hostpathplugin-4l5sb
	96b039535fe06       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3                   10 minutes ago      Exited              create                                   0                   13108374025b0       ingress-nginx-admission-create-rxj9z
	ad12aefb24105       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      10 minutes ago      Running             volume-snapshot-controller               0                   5c81b60423957       snapshot-controller-56fcc65765-5t68w
	aa50dfa90d312       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             10 minutes ago      Running             csi-attacher                             0                   a13a95290bed0       csi-hostpath-attacher-0
	b3ed8d5ef2831       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   948f1aedfa7f6       csi-hostpathplugin-4l5sb
	90c714252640f       gcr.io/cloud-spanner-emulator/emulator@sha256:6ce1265c73355797b34d2531c7146eed3996346f860517e35d1434182eb5f01d                               10 minutes ago      Running             cloud-spanner-emulator                   0                   1a6fa386c0e34       cloud-spanner-emulator-5b584cc74-qsshz
	af5b41352257c       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      10 minutes ago      Running             volume-snapshot-controller               0                   bc6f61c0c2c61       snapshot-controller-56fcc65765-mjwxw
	09213e600f0c4       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   7772f60ad5f53       csi-hostpath-resizer-0
	b0b2fe538d362       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f                        10 minutes ago      Running             metrics-server                           0                   dbcdb7b69735c       metrics-server-84c5f94fbc-dqnhw
	53e63daea4104       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             10 minutes ago      Running             minikube-ingress-dns                     0                   416bf19586582       kube-ingress-dns-minikube
	846b4d1bcfbe3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             11 minutes ago      Running             storage-provisioner                      0                   a4e85889dbd73       storage-provisioner
	62d73ade94f57       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                                             11 minutes ago      Running             coredns                                  0                   ccac108e74df4       coredns-7c65d6cfc9-r5mdg
	6e1da3a73993a       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                                             11 minutes ago      Running             kube-proxy                               0                   3929648a8d7f9       kube-proxy-qsbr8
	de10c80270b5c       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                                             11 minutes ago      Running             kindnet-cni                              0                   107beb5e7b8ce       kindnet-j682f
	1ef3f97eb6473       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                                             11 minutes ago      Running             kube-scheduler                           0                   9b8411a580ef2       kube-scheduler-addons-133262
	3cf91c4e890ab       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                                             11 minutes ago      Running             kube-controller-manager                  0                   ed11482c3169e       kube-controller-manager-addons-133262
	9a2762b26053f       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                                             11 minutes ago      Running             kube-apiserver                           0                   02dbc597f6b2f       kube-apiserver-addons-133262
	227c9772e72a3       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                                             11 minutes ago      Running             etcd                                     0                   7c44e58ec4ddc       etcd-addons-133262
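
[editor's note] The table above is crictl's human-readable view. For scripting, crictl can also emit JSON; the sketch below assumes the field layout of the CRI ListContainersResponse, which may differ across crictl versions:

    // Decode "crictl ps -a -o json" instead of scraping the table.
    // The struct mirrors only the fields used here.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type psOutput struct {
        Containers []struct {
            ID       string `json:"id"`
            State    string `json:"state"`
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
        } `json:"containers"`
    }

    func main() {
        raw, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var ps psOutput
        if err := json.Unmarshal(raw, &ps); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        for _, c := range ps.Containers {
            fmt.Printf("%s  %s  %s\n", c.ID, c.Metadata.Name, c.State)
        }
    }
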
	
	
	==> coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] <==
	[INFO] 10.244.0.15:58839 - 14543 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098385s
	[INFO] 10.244.0.15:53549 - 55590 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002790769s
	[INFO] 10.244.0.15:53549 - 53051 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003201599s
	[INFO] 10.244.0.15:57616 - 17518 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0004867s
	[INFO] 10.244.0.15:57616 - 5395 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000831857s
	[INFO] 10.244.0.15:45938 - 8747 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161046s
	[INFO] 10.244.0.15:45938 - 40758 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200742s
	[INFO] 10.244.0.15:35197 - 55448 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055129s
	[INFO] 10.244.0.15:35197 - 11418 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055506s
	[INFO] 10.244.0.15:55894 - 47736 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100871s
	[INFO] 10.244.0.15:55894 - 56694 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011304s
	[INFO] 10.244.0.15:44812 - 41796 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001561136s
	[INFO] 10.244.0.15:44812 - 9538 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00191687s
	[INFO] 10.244.0.15:49269 - 61781 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081385s
	[INFO] 10.244.0.15:49269 - 20566 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043453s
	[INFO] 10.244.0.20:57660 - 31419 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212123s
	[INFO] 10.244.0.20:32983 - 51792 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108314s
	[INFO] 10.244.0.20:49419 - 11345 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013397s
	[INFO] 10.244.0.20:59959 - 61304 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001039721s
	[INFO] 10.244.0.20:40904 - 968 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127275s
	[INFO] 10.244.0.20:60236 - 53744 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132911s
	[INFO] 10.244.0.20:44058 - 55419 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002363448s
	[INFO] 10.244.0.20:45850 - 62938 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002112385s
	[INFO] 10.244.0.20:37367 - 36922 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001718064s
	[INFO] 10.244.0.20:53861 - 52609 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002198873s
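
[editor's note] The NXDOMAIN/NOERROR pairs above are the pod resolver walking its resolv.conf search list: a name with fewer dots than ndots is tried against each search domain before being tried verbatim, which is why registry.kube-system.svc.cluster.local (four dots, under the usual ndots:5) first produces the expanded misses. A sketch of that expansion; the search list is inferred from the queries themselves, not read from the pod's actual resolv.conf:

    // Build the candidate FQDNs a glibc-style resolver would try for
    // a relative name, given a search list and an ndots threshold.
    package main

    import (
        "fmt"
        "strings"
    )

    func candidates(name string, search []string, ndots int) []string {
        var out []string
        if strings.Count(name, ".") < ndots && !strings.HasSuffix(name, ".") {
            for _, domain := range search {
                out = append(out, name+"."+domain)
            }
        }
        return append(out, name) // the bare name is tried last
    }

    func main() {
        search := []string{
            "kube-system.svc.cluster.local",
            "svc.cluster.local",
            "cluster.local",
            "us-east-2.compute.internal",
        }
        for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
            fmt.Println(q)
        }
    }
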
	
	
	==> describe nodes <==
	Name:               addons-133262
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-133262
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-133262
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_25_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-133262
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-133262"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:25:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-133262
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:37:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:37:01 +0000   Mon, 23 Sep 2024 13:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:37:01 +0000   Mon, 23 Sep 2024 13:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:37:01 +0000   Mon, 23 Sep 2024 13:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:37:01 +0000   Mon, 23 Sep 2024 13:26:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-133262
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 956a9a3790d546e98f478aa431b93546
	  System UUID:                87adfa53-2e43-424b-9596-ae2d9c13f82d
	  Boot ID:                    97839423-83c8-4f76-b1f5-7b978ef1271e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-5b584cc74-qsshz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-ncv7d                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-sn4tn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-d2pjp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-r5mdg                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-4l5sb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-addons-133262                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-j682f                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-133262                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-133262       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-qsbr8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-133262                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-dqnhw             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-5t68w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-mjwxw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node addons-133262 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node addons-133262 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node addons-133262 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node addons-133262 event: Registered Node addons-133262 in Controller
	  Normal   NodeReady                11m   kubelet          Node addons-133262 status is now: NodeReady
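	# Sanity check on the Allocated resources table above: cpu requests of 1050m against
	# the node's 2-CPU (2000m) allocatable is 1050/2000 = 52.5%, which kubectl truncates
	# to the 52% shown. To re-read just that table from the live node (a sketch, assuming
	# the same kubeconfig context):
	kubectl --context addons-133262 describe node addons-133262 | grep -A 8 'Allocated resources'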
	
	
	==> dmesg <==
	
	
	==> etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] <==
	{"level":"info","ts":"2024-09-23T13:25:21.709298Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:25:21.709623Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:25:21.709728Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:25:21.710353Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:25:21.710990Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:25:21.711900Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T13:25:21.739279Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-09-23T13:25:34.842632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.135341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-23T13:25:34.842748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.290389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:25:34.842768Z","caller":"traceutil/trace.go:171","msg":"trace[2058929202] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:340; }","duration":"101.314314ms","start":"2024-09-23T13:25:34.741449Z","end":"2024-09-23T13:25:34.842763Z","steps":["trace[2058929202] 'range keys from in-memory index tree'  (duration: 100.503503ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:34.842723Z","caller":"traceutil/trace.go:171","msg":"trace[446664898] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:340; }","duration":"101.242874ms","start":"2024-09-23T13:25:34.741467Z","end":"2024-09-23T13:25:34.842710Z","steps":["trace[446664898] 'range keys from in-memory index tree'  (duration: 100.54671ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:35.278741Z","caller":"traceutil/trace.go:171","msg":"trace[254033020] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"109.264528ms","start":"2024-09-23T13:25:35.169436Z","end":"2024-09-23T13:25:35.278701Z","steps":["trace[254033020] 'process raft request'  (duration: 25.693032ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:35.543061Z","caller":"traceutil/trace.go:171","msg":"trace[1953560179] linearizableReadLoop","detail":"{readStateIndex:358; appliedIndex:358; }","duration":"246.516564ms","start":"2024-09-23T13:25:35.296531Z","end":"2024-09-23T13:25:35.543048Z","steps":["trace[1953560179] 'read index received'  (duration: 246.511993ms)","trace[1953560179] 'applied index is now lower than readState.Index'  (duration: 3.75µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:25:35.546513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.962136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:25:35.549918Z","caller":"traceutil/trace.go:171","msg":"trace[1294112056] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:346; }","duration":"253.35678ms","start":"2024-09-23T13:25:35.296527Z","end":"2024-09-23T13:25:35.549883Z","steps":["trace[1294112056] 'agreement among raft nodes before linearized reading'  (duration: 249.932811ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:25:35.585444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.084397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-23T13:25:35.594064Z","caller":"traceutil/trace.go:171","msg":"trace[876578767] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:347; }","duration":"297.060415ms","start":"2024-09-23T13:25:35.296987Z","end":"2024-09-23T13:25:35.594048Z","steps":["trace[876578767] 'agreement among raft nodes before linearized reading'  (duration: 288.392254ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:38.966922Z","caller":"traceutil/trace.go:171","msg":"trace[1773688844] transaction","detail":"{read_only:false; response_revision:683; number_of_response:1; }","duration":"178.146917ms","start":"2024-09-23T13:25:38.788751Z","end":"2024-09-23T13:25:38.966898Z","steps":["trace[1773688844] 'process raft request'  (duration: 178.069176ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:38.969539Z","caller":"traceutil/trace.go:171","msg":"trace[208207188] linearizableReadLoop","detail":"{readStateIndex:708; appliedIndex:708; }","duration":"179.399909ms","start":"2024-09-23T13:25:38.790120Z","end":"2024-09-23T13:25:38.969520Z","steps":["trace[208207188] 'read index received'  (duration: 179.395093ms)","trace[208207188] 'applied index is now lower than readState.Index'  (duration: 3.47µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:25:38.995483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.007661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-133262\" ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2024-09-23T13:25:38.995537Z","caller":"traceutil/trace.go:171","msg":"trace[562549751] range","detail":"{range_begin:/registry/minions/addons-133262; range_end:; response_count:1; response_revision:683; }","duration":"207.069297ms","start":"2024-09-23T13:25:38.788455Z","end":"2024-09-23T13:25:38.995524Z","steps":["trace[562549751] 'agreement among raft nodes before linearized reading'  (duration: 181.122667ms)","trace[562549751] 'range keys from in-memory index tree'  (duration: 25.81627ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T13:25:38.995831Z","caller":"traceutil/trace.go:171","msg":"trace[317156449] transaction","detail":"{read_only:false; response_revision:684; number_of_response:1; }","duration":"198.798657ms","start":"2024-09-23T13:25:38.797023Z","end":"2024-09-23T13:25:38.995821Z","steps":["trace[317156449] 'process raft request'  (duration: 172.804094ms)","trace[317156449] 'compare'  (duration: 25.418387ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T13:35:21.883294Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1525}
	{"level":"info","ts":"2024-09-23T13:35:21.915067Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1525,"took":"31.250229ms","hash":2887434094,"current-db-size-bytes":6610944,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3317760,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-23T13:35:21.915114Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2887434094,"revision":1525,"compact-revision":-1}
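	# Each "apply request took too long" warning above marks a read that exceeded etcd's
	# 100ms expected-duration budget. To count how often that happened over the run
	# (a sketch; the etcd pod name comes from the non-terminated pod list earlier):
	kubectl --context addons-133262 -n kube-system logs etcd-addons-133262 | grep -c 'took too long'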
	
	
	==> gcp-auth [334680bd78e33f77a791df37c38d964e3d859e5ec3bc4717d639109d0e519646] <==
	2024/09/23 13:27:30 GCP Auth Webhook started!
	2024/09/23 13:28:03 Ready to marshal response ...
	2024/09/23 13:28:03 Ready to write response ...
	2024/09/23 13:28:03 Ready to marshal response ...
	2024/09/23 13:28:03 Ready to write response ...
	2024/09/23 13:28:03 Ready to marshal response ...
	2024/09/23 13:28:03 Ready to write response ...
	2024/09/23 13:36:17 Ready to marshal response ...
	2024/09/23 13:36:17 Ready to write response ...
	2024/09/23 13:36:25 Ready to marshal response ...
	2024/09/23 13:36:25 Ready to write response ...
	2024/09/23 13:36:25 Ready to marshal response ...
	2024/09/23 13:36:25 Ready to write response ...
	2024/09/23 13:36:35 Ready to marshal response ...
	2024/09/23 13:36:35 Ready to write response ...
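	# Each marshal/write pair above is the gcp-auth admission webhook mutating a pod; it
	# is what injects the GOOGLE_APPLICATION_CREDENTIALS mount and the this_is_fake
	# project env vars visible in the busybox pod described later in this report.
	# To confirm the webhook registration exists (a sketch; listed unfiltered rather than
	# guessing the addon's object name):
	kubectl --context addons-133262 get mutatingwebhookconfigurations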
	
	
	==> kernel <==
	 13:37:19 up 15:19,  0 users,  load average: 0.66, 0.51, 1.46
	Linux addons-133262 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] <==
	I0923 13:35:16.082974       1 main.go:299] handling current node
	I0923 13:35:26.083246       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:35:26.083279       1 main.go:299] handling current node
	I0923 13:35:36.082975       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:35:36.083010       1 main.go:299] handling current node
	I0923 13:35:46.083159       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:35:46.083193       1 main.go:299] handling current node
	I0923 13:35:56.083554       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:35:56.083700       1 main.go:299] handling current node
	I0923 13:36:06.083174       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:36:06.083214       1 main.go:299] handling current node
	I0923 13:36:16.083239       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:36:16.083275       1 main.go:299] handling current node
	I0923 13:36:26.082680       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:36:26.082730       1 main.go:299] handling current node
	I0923 13:36:36.082907       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:36:36.082949       1 main.go:299] handling current node
	I0923 13:36:46.082964       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:36:46.082998       1 main.go:299] handling current node
	I0923 13:36:56.082974       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:36:56.083012       1 main.go:299] handling current node
	I0923 13:37:06.083685       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:37:06.083724       1 main.go:299] handling current node
	I0923 13:37:16.082917       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:37:16.082951       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] <==
	W0923 13:27:36.828296       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 13:27:36.828376       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0923 13:27:37.830783       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 13:27:37.830832       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0923 13:27:37.830965       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 13:27:37.831044       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0923 13:27:37.831943       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0923 13:27:37.833027       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0923 13:27:41.838289       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.89.130:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.89.130:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.89.130:443: i/o timeout" logger="UnhandledError"
	W0923 13:27:41.838333       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 13:27:41.838495       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0923 13:27:41.883186       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0923 13:36:36.296973       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 13:36:36.308652       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 13:36:36.324442       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 13:36:51.321281       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
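	# The 503s above all trace back to the metrics-server APIService at 10.110.89.130:443
	# never answering, which is also why the OpenAPI aggregator keeps requeueing
	# v1beta1.metrics.k8s.io. Its registration state can be checked directly (a sketch):
	kubectl --context addons-133262 get apiservice v1beta1.metrics.k8s.io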
	
	
	==> kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] <==
	I0923 13:27:13.070498       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="1s"
	I0923 13:27:14.019767       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="1s"
	I0923 13:27:14.030720       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="1s"
	I0923 13:27:14.036641       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="1s"
	I0923 13:27:27.449701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="69.562µs"
	I0923 13:27:29.249784       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-133262"
	I0923 13:27:30.508796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.03517ms"
	I0923 13:27:30.509207       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="65.041µs"
	E0923 13:27:31.436510       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0923 13:27:31.986821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0923 13:27:36.808277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="22.923563ms"
	I0923 13:27:36.808604       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="66.296µs"
	I0923 13:27:39.020481       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0923 13:27:39.024660       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0923 13:27:39.061606       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0923 13:27:39.063458       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0923 13:27:40.766393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="23.327574ms"
	I0923 13:27:40.766611       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="94.586µs"
	I0923 13:27:59.929895       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-133262"
	I0923 13:33:06.528580       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-133262"
	I0923 13:36:13.485698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="6.769µs"
	I0923 13:36:23.612615       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0923 13:36:35.962400       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="7.311µs"
	I0923 13:37:01.772144       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-133262"
	I0923 13:37:17.907444       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.663µs"
	
	
	==> kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] <==
	I0923 13:25:36.937749       1 server_linux.go:66] "Using iptables proxy"
	I0923 13:25:37.338915       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 13:25:37.338986       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:25:37.413835       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 13:25:37.413972       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:25:37.415844       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:25:37.416398       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:25:37.416459       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:25:37.423427       1 config.go:199] "Starting service config controller"
	I0923 13:25:37.423523       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:25:37.423586       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:25:37.423616       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:25:37.424042       1 config.go:328] "Starting node config controller"
	I0923 13:25:37.424095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:25:37.584302       1 shared_informer.go:320] Caches are synced for node config
	I0923 13:25:37.584359       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:25:37.623656       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] <==
	W0923 13:25:24.580187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 13:25:24.582008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:24.580220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:25:24.582107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:24.580318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:25:24.582203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:24.580371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:25:24.582354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.418271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 13:25:25.418450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.587950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 13:25:25.588075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.616405       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:25:25.616450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.642462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 13:25:25.647975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.666673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:25:25.666824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.673612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:25:25.673747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.684524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 13:25:25.684652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.718405       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:25:25.718452       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 13:25:27.559102       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
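	# Every "forbidden" reflector warning above predates the final "Caches are synced"
	# line; that is the usual startup race while RBAC for system:kube-scheduler
	# propagates. A quick check that the warnings stopped after startup (a sketch):
	kubectl --context addons-133262 -n kube-system logs kube-scheduler-addons-133262 --since=10m | grep -c forbidden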
	
	
	==> kubelet <==
	Sep 23 13:37:17 addons-133262 kubelet[1502]: E0923 13:37:17.184631    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098637184386474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:489422,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:37:17 addons-133262 kubelet[1502]: E0923 13:37:17.184670    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098637184386474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:489422,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:37:17 addons-133262 kubelet[1502]: I0923 13:37:17.368489    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2b0a3292-cbd1-4a87-bddf-6234359cdf59-gcp-creds\") pod \"2b0a3292-cbd1-4a87-bddf-6234359cdf59\" (UID: \"2b0a3292-cbd1-4a87-bddf-6234359cdf59\") "
	Sep 23 13:37:17 addons-133262 kubelet[1502]: I0923 13:37:17.368571    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slf7m\" (UniqueName: \"kubernetes.io/projected/2b0a3292-cbd1-4a87-bddf-6234359cdf59-kube-api-access-slf7m\") pod \"2b0a3292-cbd1-4a87-bddf-6234359cdf59\" (UID: \"2b0a3292-cbd1-4a87-bddf-6234359cdf59\") "
	Sep 23 13:37:17 addons-133262 kubelet[1502]: I0923 13:37:17.368976    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b0a3292-cbd1-4a87-bddf-6234359cdf59-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "2b0a3292-cbd1-4a87-bddf-6234359cdf59" (UID: "2b0a3292-cbd1-4a87-bddf-6234359cdf59"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 13:37:17 addons-133262 kubelet[1502]: I0923 13:37:17.370591    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0a3292-cbd1-4a87-bddf-6234359cdf59-kube-api-access-slf7m" (OuterVolumeSpecName: "kube-api-access-slf7m") pod "2b0a3292-cbd1-4a87-bddf-6234359cdf59" (UID: "2b0a3292-cbd1-4a87-bddf-6234359cdf59"). InnerVolumeSpecName "kube-api-access-slf7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 13:37:17 addons-133262 kubelet[1502]: I0923 13:37:17.469577    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-slf7m\" (UniqueName: \"kubernetes.io/projected/2b0a3292-cbd1-4a87-bddf-6234359cdf59-kube-api-access-slf7m\") on node \"addons-133262\" DevicePath \"\""
	Sep 23 13:37:17 addons-133262 kubelet[1502]: I0923 13:37:17.469619    1502 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2b0a3292-cbd1-4a87-bddf-6234359cdf59-gcp-creds\") on node \"addons-133262\" DevicePath \"\""
	Sep 23 13:37:17 addons-133262 kubelet[1502]: I0923 13:37:17.915344    1502 scope.go:117] "RemoveContainer" containerID="c564c5ad6a593d0cd8d550d9a0616d5144bdc5b5e9ef3342f33b607acf71b371"
	Sep 23 13:37:17 addons-133262 kubelet[1502]: E0923 13:37:17.915539    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-ncv7d_gadget(d19e7839-1016-4e61-ba5e-28b2f0a6c2eb)\"" pod="gadget/gadget-ncv7d" podUID="d19e7839-1016-4e61-ba5e-28b2f0a6c2eb"
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.275057    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfmxx\" (UniqueName: \"kubernetes.io/projected/cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8-kube-api-access-jfmxx\") pod \"cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8\" (UID: \"cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8\") "
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.275120    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj55f\" (UniqueName: \"kubernetes.io/projected/d093e650-6688-49f8-9c46-28a49dd5a974-kube-api-access-qj55f\") pod \"d093e650-6688-49f8-9c46-28a49dd5a974\" (UID: \"d093e650-6688-49f8-9c46-28a49dd5a974\") "
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.277456    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8-kube-api-access-jfmxx" (OuterVolumeSpecName: "kube-api-access-jfmxx") pod "cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8" (UID: "cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8"). InnerVolumeSpecName "kube-api-access-jfmxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.278866    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d093e650-6688-49f8-9c46-28a49dd5a974-kube-api-access-qj55f" (OuterVolumeSpecName: "kube-api-access-qj55f") pod "d093e650-6688-49f8-9c46-28a49dd5a974" (UID: "d093e650-6688-49f8-9c46-28a49dd5a974"). InnerVolumeSpecName "kube-api-access-qj55f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.376402    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qj55f\" (UniqueName: \"kubernetes.io/projected/d093e650-6688-49f8-9c46-28a49dd5a974-kube-api-access-qj55f\") on node \"addons-133262\" DevicePath \"\""
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.376438    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jfmxx\" (UniqueName: \"kubernetes.io/projected/cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8-kube-api-access-jfmxx\") on node \"addons-133262\" DevicePath \"\""
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.760877    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0a3292-cbd1-4a87-bddf-6234359cdf59" path="/var/lib/kubelet/pods/2b0a3292-cbd1-4a87-bddf-6234359cdf59/volumes"
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.864992    1502 scope.go:117] "RemoveContainer" containerID="aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730"
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.918205    1502 scope.go:117] "RemoveContainer" containerID="aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730"
	Sep 23 13:37:18 addons-133262 kubelet[1502]: E0923 13:37:18.919215    1502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730\": container with ID starting with aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730 not found: ID does not exist" containerID="aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730"
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.919261    1502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730"} err="failed to get container status \"aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730\": rpc error: code = NotFound desc = could not find container \"aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730\": container with ID starting with aa4fd929ada1f744c7eeeb47a54a0b43ce6e3332328d568161be3bceacf6d730 not found: ID does not exist"
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.919290    1502 scope.go:117] "RemoveContainer" containerID="ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762"
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.946162    1502 scope.go:117] "RemoveContainer" containerID="ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762"
	Sep 23 13:37:18 addons-133262 kubelet[1502]: E0923 13:37:18.946783    1502 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762\": container with ID starting with ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762 not found: ID does not exist" containerID="ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762"
	Sep 23 13:37:18 addons-133262 kubelet[1502]: I0923 13:37:18.946818    1502 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762"} err="failed to get container status \"ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762\": rpc error: code = NotFound desc = could not find container \"ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762\": container with ID starting with ab5f9daf7ef2dca463ed8db6e275e84c0307116d3d80ad74d551e94738f35762 not found: ID does not exist"
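	# Aside from the teardown of the registry pods, the one live problem here is the
	# gadget container held in a 5m CrashLoopBackOff window rather than being restarted.
	# The usual next step is the previous container's logs (a sketch; pod name taken from
	# the back-off message above):
	kubectl --context addons-133262 -n gadget logs gadget-ncv7d --previous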
	
	
	==> storage-provisioner [846b4d1bcfbe362e097d8174a0b2808c301ad53a9959a5c8577ae8669f7374d8] <==
	I0923 13:26:17.410205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 13:26:17.427878       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 13:26:17.428011       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 13:26:17.451710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 13:26:17.452694       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-133262_255eadcb-81cb-4ff4-8832-d04e319c6694!
	I0923 13:26:17.453702       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f2b24c6-4123-42bd-a56d-cf65e312df77", APIVersion:"v1", ResourceVersion:"901", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-133262_255eadcb-81cb-4ff4-8832-d04e319c6694 became leader
	I0923 13:26:17.552987       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-133262_255eadcb-81cb-4ff4-8832-d04e319c6694!
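	# The lease acquired above is recorded on the kube-system/k8s.io-minikube-hostpath
	# Endpoints object named in the event; its annotations identify the current leader
	# (a sketch):
	kubectl --context addons-133262 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml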
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-133262 -n addons-133262
helpers_test.go:261: (dbg) Run:  kubectl --context addons-133262 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-rxj9z ingress-nginx-admission-patch-lzqrp
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-133262 describe pod busybox ingress-nginx-admission-create-rxj9z ingress-nginx-admission-patch-lzqrp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-133262 describe pod busybox ingress-nginx-admission-create-rxj9z ingress-nginx-admission-patch-lzqrp: exit status 1 (99.545448ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-133262/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 13:28:03 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2xb2r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2xb2r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m17s                   default-scheduler  Successfully assigned default/busybox to addons-133262
	  Normal   Pulling    7m52s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m52s (x4 over 9m16s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m52s (x4 over 9m16s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m29s (x6 over 9m16s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m10s (x20 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rxj9z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lzqrp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-133262 describe pod busybox ingress-nginx-admission-create-rxj9z ingress-nginx-admission-patch-lzqrp: exit status 1
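The busybox pod described above is stuck in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "invalid username/password: unauthorized", most likely the placeholder credentials wired in by the gcp-auth addon (note the this_is_fake project variables in its environment). Retrying the pull directly on the node separates registry auth from pod scheduling (a sketch, assuming crictl is available on the node, as it normally is with the cri-o runtime):

  out/minikube-linux-arm64 -p addons-133262 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"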
--- FAIL: TestAddons/parallel/Registry (73.85s)
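The kubelet and controller-manager logs above show the registry pods being torn down as the test exits. When rerunning, it is worth confirming first that the addon's Service resolves to ready endpoints (a sketch, assuming the addon's default Service name in kube-system; the matching ReplicaSet kube-system/registry-66c9cd494c appears in the controller-manager log):

  kubectl --context addons-133262 -n kube-system get svc,endpoints registry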

                                                
                                    
TestAddons/parallel/Ingress (155.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-133262 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-133262 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-133262 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3615ca70-8288-4ccd-a9d8-b769c85dcbaf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3615ca70-8288-4ccd-a9d8-b769c85dcbaf] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003374681s
I0923 13:38:45.727965 2383070 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-133262 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.353102699s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
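Exit status 28 propagated through ssh matches curl's "operation timed out" error code, so the request hung rather than being refused. Reproducing it with an explicit timeout and verbose output fails fast and shows whether the connect or the response is what stalls (a sketch):

  out/minikube-linux-arm64 -p addons-133262 ssh "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"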
addons_test.go:284: (dbg) Run:  kubectl --context addons-133262 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-133262 addons disable ingress-dns --alsologtostderr -v=1: (1.814399185s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-133262 addons disable ingress --alsologtostderr -v=1: (7.91526961s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-133262
helpers_test.go:235: (dbg) docker inspect addons-133262:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95",
	        "Created": "2024-09-23T13:25:04.273986374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2384322,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T13:25:04.39615577Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/hostname",
	        "HostsPath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/hosts",
	        "LogPath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95-json.log",
	        "Name": "/addons-133262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-133262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-133262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc-init/diff:/var/lib/docker/overlay2/cb21b5e82393f0d5264c7db3ef721bc402a1fb078a3835cf5b3c87b0c534f7c3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-133262",
	                "Source": "/var/lib/docker/volumes/addons-133262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-133262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-133262",
	                "name.minikube.sigs.k8s.io": "addons-133262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1741029badc86a71140569cf0476e607610316c0823ed37e11befd21a27df5ad",
	            "SandboxKey": "/var/run/docker/netns/1741029badc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35738"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-133262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "32e42fc489c18023f59643e3f9c8a5aaca44c70cab10ea22839173b8efe7a5b0",
	                    "EndpointID": "f553c0425f96879275a6868c4915333e0a9bf18829e579f5bd5a87a9769b40ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-133262",
	                        "5025a3e56240"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
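The Ports map in the inspect dump above shows how each container port reaches the host: every service port (22, 2376, 5000, 8443, 32443) is published on a dynamically assigned loopback port. A minimal sketch for querying one such mapping directly, assuming the node container is named after the profile (addons-133262) as shown above; `docker port` and the `--format` template style mirror commands that appear later in this log:

	# host endpoint for the registry port (per the dump above: 127.0.0.1:35736)
	docker port addons-133262 5000/tcp
	# same lookup via an inspect template
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}' addons-133262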
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-133262 -n addons-133262
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-133262 logs -n 25: (1.495748301s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-801108                                                                     | download-only-801108   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| delete  | -p download-only-496865                                                                     | download-only-496865   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| start   | --download-only -p                                                                          | download-docker-237977 | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | download-docker-237977                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-237977                                                                   | download-docker-237977 | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-127301   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | binary-mirror-127301                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42465                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-127301                                                                     | binary-mirror-127301   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| addons  | enable dashboard -p                                                                         | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | addons-133262                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | addons-133262                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-133262 --wait=true                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:36 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:36 UTC |
	|         | -p addons-133262                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-133262 ssh cat                                                                       | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:36 UTC |
	|         | /opt/local-path-provisioner/pvc-ba93c3ca-4ceb-4c2d-8d75-76b896b20b5e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:37 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-133262 ip                                                                            | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	|         | -p addons-133262                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	|         | addons-133262                                                                               |                        |         |         |                     |                     |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-133262 addons                                                                        | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC | 23 Sep 24 13:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-133262 addons                                                                        | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC | 23 Sep 24 13:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC | 23 Sep 24 13:38 UTC |
	|         | addons-133262                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-133262 ssh curl -s                                                                   | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-133262 ip                                                                            | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:40 UTC | 23 Sep 24 13:40 UTC |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:40 UTC | 23 Sep 24 13:40 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:40 UTC | 23 Sep 24 13:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
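	For reference, the multi-row `start` entry above reassembles into a single invocation; this is a reconstruction from the wrapped Args column, in row order, not an additional command:
	  out/minikube-linux-arm64 start -p addons-133262 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns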
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:24:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:24:40.364478 2383828 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:24:40.364687 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:24:40.364718 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:24:40.364739 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:24:40.365007 2383828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 13:24:40.365476 2383828 out.go:352] Setting JSON to false
	I0923 13:24:40.366420 2383828 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":54423,"bootTime":1727043457,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0923 13:24:40.366518 2383828 start.go:139] virtualization:  
	I0923 13:24:40.368697 2383828 out.go:177] * [addons-133262] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:24:40.370555 2383828 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:24:40.370655 2383828 notify.go:220] Checking for updates...
	I0923 13:24:40.373762 2383828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:24:40.375645 2383828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:24:40.376840 2383828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	I0923 13:24:40.378275 2383828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:24:40.379541 2383828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:24:40.380976 2383828 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:24:40.425606 2383828 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:24:40.425734 2383828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:24:40.478465 2383828 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:24:40.468583329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:24:40.478577 2383828 docker.go:318] overlay module found
	I0923 13:24:40.480220 2383828 out.go:177] * Using the docker driver based on user configuration
	I0923 13:24:40.481509 2383828 start.go:297] selected driver: docker
	I0923 13:24:40.481524 2383828 start.go:901] validating driver "docker" against <nil>
	I0923 13:24:40.481538 2383828 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:24:40.482184 2383828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:24:40.531533 2383828 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:24:40.521410022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:24:40.531752 2383828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:24:40.531987 2383828 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:24:40.533357 2383828 out.go:177] * Using Docker driver with root privileges
	I0923 13:24:40.534774 2383828 cni.go:84] Creating CNI manager for ""
	I0923 13:24:40.534836 2383828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:24:40.534848 2383828 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 13:24:40.534944 2383828 start.go:340] cluster config:
	{Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:24:40.536381 2383828 out.go:177] * Starting "addons-133262" primary control-plane node in "addons-133262" cluster
	I0923 13:24:40.537851 2383828 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 13:24:40.539216 2383828 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:24:40.540387 2383828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:24:40.540468 2383828 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0923 13:24:40.540480 2383828 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:24:40.540486 2383828 cache.go:56] Caching tarball of preloaded images
	I0923 13:24:40.540576 2383828 preload.go:172] Found /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0923 13:24:40.540587 2383828 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 13:24:40.540932 2383828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/config.json ...
	I0923 13:24:40.540964 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/config.json: {Name:mk0f11192ff62aa19eaf7345f3142fd23df23f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:24:40.557194 2383828 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:24:40.557302 2383828 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:24:40.557321 2383828 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 13:24:40.557327 2383828 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 13:24:40.557334 2383828 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 13:24:40.557340 2383828 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 13:24:57.517135 2383828 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 13:24:57.517177 2383828 cache.go:194] Successfully downloaded all kic artifacts
	I0923 13:24:57.517208 2383828 start.go:360] acquireMachinesLock for addons-133262: {Name:mkbc92a211fc9b19084838acda6ec6db74ac2de5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:24:57.517340 2383828 start.go:364] duration metric: took 100.034µs to acquireMachinesLock for "addons-133262"
	I0923 13:24:57.517372 2383828 start.go:93] Provisioning new machine with config: &{Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 13:24:57.517487 2383828 start.go:125] createHost starting for "" (driver="docker")
	I0923 13:24:57.519552 2383828 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 13:24:57.519788 2383828 start.go:159] libmachine.API.Create for "addons-133262" (driver="docker")
	I0923 13:24:57.519822 2383828 client.go:168] LocalClient.Create starting
	I0923 13:24:57.519927 2383828 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem
	I0923 13:24:57.928803 2383828 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem
	I0923 13:24:58.062903 2383828 cli_runner.go:164] Run: docker network inspect addons-133262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 13:24:58.077185 2383828 cli_runner.go:211] docker network inspect addons-133262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 13:24:58.077288 2383828 network_create.go:284] running [docker network inspect addons-133262] to gather additional debugging logs...
	I0923 13:24:58.077309 2383828 cli_runner.go:164] Run: docker network inspect addons-133262
	W0923 13:24:58.092464 2383828 cli_runner.go:211] docker network inspect addons-133262 returned with exit code 1
	I0923 13:24:58.092500 2383828 network_create.go:287] error running [docker network inspect addons-133262]: docker network inspect addons-133262: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-133262 not found
	I0923 13:24:58.092521 2383828 network_create.go:289] output of [docker network inspect addons-133262]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-133262 not found
	
	** /stderr **
	I0923 13:24:58.092643 2383828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:24:58.108933 2383828 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001781250}
	I0923 13:24:58.108976 2383828 network_create.go:124] attempt to create docker network addons-133262 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 13:24:58.109032 2383828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-133262 addons-133262
	I0923 13:24:58.181902 2383828 network_create.go:108] docker network addons-133262 192.168.49.0/24 created
	I0923 13:24:58.181937 2383828 kic.go:121] calculated static IP "192.168.49.2" for the "addons-133262" container
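	A quick way to confirm the subnet and gateway chosen above, reusing the inspect template style already shown in this log (a verification sketch, not part of the captured run):
	  docker network inspect addons-133262 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # expected per the lines above: 192.168.49.0/24 192.168.49.1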
	I0923 13:24:58.182008 2383828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 13:24:58.195905 2383828 cli_runner.go:164] Run: docker volume create addons-133262 --label name.minikube.sigs.k8s.io=addons-133262 --label created_by.minikube.sigs.k8s.io=true
	I0923 13:24:58.210686 2383828 oci.go:103] Successfully created a docker volume addons-133262
	I0923 13:24:58.210778 2383828 cli_runner.go:164] Run: docker run --rm --name addons-133262-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133262 --entrypoint /usr/bin/test -v addons-133262:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 13:25:00.216144 2383828 cli_runner.go:217] Completed: docker run --rm --name addons-133262-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133262 --entrypoint /usr/bin/test -v addons-133262:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.005304311s)
	I0923 13:25:00.216182 2383828 oci.go:107] Successfully prepared a docker volume addons-133262
	I0923 13:25:00.216215 2383828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:25:00.216236 2383828 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 13:25:00.216350 2383828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-133262:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 13:25:04.208435 2383828 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-133262:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.992035598s)
	I0923 13:25:04.208472 2383828 kic.go:203] duration metric: took 3.992232385s to extract preloaded images to volume ...
	W0923 13:25:04.208630 2383828 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 13:25:04.208755 2383828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 13:25:04.259929 2383828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-133262 --name addons-133262 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133262 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-133262 --network addons-133262 --ip 192.168.49.2 --volume addons-133262:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 13:25:04.567167 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Running}}
	I0923 13:25:04.589203 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:04.612088 2383828 cli_runner.go:164] Run: docker exec addons-133262 stat /var/lib/dpkg/alternatives/iptables
	I0923 13:25:04.695578 2383828 oci.go:144] the created container "addons-133262" has a running status.
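	The long docker run above is the actual node-container creation; trimmed to its load-bearing flags for readability (every flag copied from that line, security/tmpfs/label flags omitted, so this is a restatement rather than an equivalent command):
	  # profile network with the static IP calculated above; /var on the preloaded profile volume;
	  # empty host ports (127.0.0.1::PORT) request the dynamically assigned loopback ports seen in the inspect dump
	  docker run -d -t --privileged --name addons-133262 --hostname addons-133262 \
	    --network addons-133262 --ip 192.168.49.2 \
	    --volume addons-133262:/var \
	    --memory=4000mb --cpus=2 \
	    --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::5000 \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed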
	I0923 13:25:04.695609 2383828 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa...
	I0923 13:25:05.137525 2383828 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 13:25:05.169488 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:05.191833 2383828 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 13:25:05.191853 2383828 kic_runner.go:114] Args: [docker exec --privileged addons-133262 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 13:25:05.256602 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:05.283280 2383828 machine.go:93] provisionDockerMachine start ...
	I0923 13:25:05.283429 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:05.305554 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:05.305832 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:05.305849 2383828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:25:05.485763 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-133262
	
	I0923 13:25:05.485787 2383828 ubuntu.go:169] provisioning hostname "addons-133262"
	I0923 13:25:05.485852 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:05.505809 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:05.506049 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:05.506062 2383828 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-133262 && echo "addons-133262" | sudo tee /etc/hostname
	I0923 13:25:05.661069 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-133262
	
	I0923 13:25:05.661155 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:05.688059 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:05.688338 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:05.688355 2383828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-133262' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-133262/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-133262' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:25:05.822488 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:25:05.822526 2383828 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-2377681/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-2377681/.minikube}
	I0923 13:25:05.822550 2383828 ubuntu.go:177] setting up certificates
	I0923 13:25:05.822561 2383828 provision.go:84] configureAuth start
	I0923 13:25:05.822632 2383828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133262
	I0923 13:25:05.839354 2383828 provision.go:143] copyHostCerts
	I0923 13:25:05.839446 2383828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem (1078 bytes)
	I0923 13:25:05.839573 2383828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem (1123 bytes)
	I0923 13:25:05.839636 2383828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem (1679 bytes)
	I0923 13:25:05.839689 2383828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem org=jenkins.addons-133262 san=[127.0.0.1 192.168.49.2 addons-133262 localhost minikube]
	I0923 13:25:06.495243 2383828 provision.go:177] copyRemoteCerts
	I0923 13:25:06.495317 2383828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:25:06.495387 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:06.514794 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:06.612607 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 13:25:06.638504 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 13:25:06.663621 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:25:06.689379 2383828 provision.go:87] duration metric: took 866.80454ms to configureAuth
	I0923 13:25:06.689451 2383828 ubuntu.go:193] setting minikube options for container-runtime
	I0923 13:25:06.689667 2383828 config.go:182] Loaded profile config "addons-133262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:25:06.689785 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:06.707118 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:06.707369 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:06.707392 2383828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:25:06.938544 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:25:06.938576 2383828 machine.go:96] duration metric: took 1.655268945s to provisionDockerMachine
	I0923 13:25:06.938587 2383828 client.go:171] duration metric: took 9.418759041s to LocalClient.Create
	I0923 13:25:06.938600 2383828 start.go:167] duration metric: took 9.418812767s to libmachine.API.Create "addons-133262"
	I0923 13:25:06.938608 2383828 start.go:293] postStartSetup for "addons-133262" (driver="docker")
	I0923 13:25:06.938620 2383828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:25:06.938686 2383828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:25:06.938731 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:06.956302 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.055692 2383828 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:25:07.058884 2383828 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 13:25:07.058918 2383828 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 13:25:07.058931 2383828 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 13:25:07.058938 2383828 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 13:25:07.058953 2383828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/addons for local assets ...
	I0923 13:25:07.059040 2383828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/files for local assets ...
	I0923 13:25:07.059075 2383828 start.go:296] duration metric: took 120.460907ms for postStartSetup
	I0923 13:25:07.059396 2383828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133262
	I0923 13:25:07.076417 2383828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/config.json ...
	I0923 13:25:07.076731 2383828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:25:07.076792 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:07.093453 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.183072 2383828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 13:25:07.187501 2383828 start.go:128] duration metric: took 9.669998429s to createHost
	I0923 13:25:07.187526 2383828 start.go:83] releasing machines lock for "addons-133262", held for 9.670170929s
	I0923 13:25:07.187597 2383828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133262
	I0923 13:25:07.203630 2383828 ssh_runner.go:195] Run: cat /version.json
	I0923 13:25:07.203673 2383828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:25:07.203683 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:07.203744 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:07.223131 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.234414 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.436803 2383828 ssh_runner.go:195] Run: systemctl --version
	I0923 13:25:07.441288 2383828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:25:07.583937 2383828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:25:07.588356 2383828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:25:07.611186 2383828 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0923 13:25:07.611279 2383828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:25:07.642594 2383828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0923 13:25:07.642666 2383828 start.go:495] detecting cgroup driver to use...
	I0923 13:25:07.642718 2383828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 13:25:07.642799 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:25:07.659158 2383828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:25:07.670791 2383828 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:25:07.670915 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:25:07.685963 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:25:07.700410 2383828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:25:07.793728 2383828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:25:07.888156 2383828 docker.go:233] disabling docker service ...
	I0923 13:25:07.888238 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:25:07.908488 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:25:07.920988 2383828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:25:08.011802 2383828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:25:08.116061 2383828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 13:25:08.127456 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:25:08.144788 2383828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 13:25:08.144859 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.155741 2383828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:25:08.155815 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.166342 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.176318 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.186297 2383828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:25:08.195794 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.205821 2383828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.222517 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
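
Note: the sed edits above all land in the same drop-in. Assuming they applied cleanly, the effective settings can be confirmed with a grep (expected values, taken from the log lines above, shown as comments):

    sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [        (followed by "net.ipv4.ip_unprivileged_port_start=0",)
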
	I0923 13:25:08.232461 2383828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:25:08.241712 2383828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
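
Note: writing 1 to /proc/sys/net/ipv4/ip_forward only changes the running kernel and does not survive a node reboot. A hypothetical way to persist it (file name invented for illustration):

    echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
    sudo sysctl --system   # reload all sysctl.d fragments
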
	I0923 13:25:08.250384 2383828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:25:08.337916 2383828 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 13:25:08.443675 2383828 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:25:08.443763 2383828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:25:08.447871 2383828 start.go:563] Will wait 60s for crictl version
	I0923 13:25:08.447976 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:25:08.451632 2383828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:25:08.495719 2383828 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0923 13:25:08.495829 2383828 ssh_runner.go:195] Run: crio --version
	I0923 13:25:08.534184 2383828 ssh_runner.go:195] Run: crio --version
	I0923 13:25:08.574119 2383828 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0923 13:25:08.575986 2383828 cli_runner.go:164] Run: docker network inspect addons-133262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:25:08.591880 2383828 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 13:25:08.595405 2383828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:25:08.606218 2383828 kubeadm.go:883] updating cluster {Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:25:08.606418 2383828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:25:08.606486 2383828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:25:08.683043 2383828 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:25:08.683069 2383828 crio.go:433] Images already preloaded, skipping extraction
	I0923 13:25:08.683126 2383828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:25:08.718285 2383828 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:25:08.718324 2383828 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:25:08.718333 2383828 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0923 13:25:08.718438 2383828 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-133262 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
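
Note: in the unit fragment above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service so the following line replaces it instead of appending a second command. The merged result can be inspected with:

    sudo systemctl cat kubelet     # base unit plus the 10-kubeadm.conf drop-in written below
    sudo systemctl daemon-reload   # required after changing drop-ins (done at 13:25:08.850762)
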
	I0923 13:25:08.718527 2383828 ssh_runner.go:195] Run: crio config
	I0923 13:25:08.764315 2383828 cni.go:84] Creating CNI manager for ""
	I0923 13:25:08.764337 2383828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:25:08.764348 2383828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:25:08.764370 2383828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-133262 NodeName:addons-133262 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:25:08.764526 2383828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-133262"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
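
Note: the generated config above still uses the deprecated v1beta3 API (kubeadm warns about this during init further down). Assuming the kubeadm v1.31.1 binary from the minikube binaries directory, a hypothetical pre-flight syntax check, once the file is promoted to /var/tmp/minikube/kubeadm.yaml a few steps below, would be:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
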
	I0923 13:25:08.764603 2383828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:25:08.773406 2383828 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:25:08.773479 2383828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:25:08.782241 2383828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0923 13:25:08.800013 2383828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:25:08.818404 2383828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0923 13:25:08.836149 2383828 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 13:25:08.839708 2383828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:25:08.850762 2383828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:25:08.932670 2383828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:25:08.946645 2383828 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262 for IP: 192.168.49.2
	I0923 13:25:08.946664 2383828 certs.go:194] generating shared ca certs ...
	I0923 13:25:08.946681 2383828 certs.go:226] acquiring lock for ca certs: {Name:mka74fca5f9586bfec26165232a0abe6b9527b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:08.946856 2383828 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key
	I0923 13:25:09.534535 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt ...
	I0923 13:25:09.534569 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt: {Name:mkd6669f44b9a5690ab69d1191d9d59bfa475998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.534806 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key ...
	I0923 13:25:09.534822 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key: {Name:mkcb9f518a9706e806f1e3ce2b21f17dd1ea4af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.535463 2383828 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key
	I0923 13:25:09.881577 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt ...
	I0923 13:25:09.881615 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt: {Name:mkfe3b6cdbf84ec160efdee677ace7ad97157d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.881813 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key ...
	I0923 13:25:09.881828 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key: {Name:mkfb51a840155a14a8cc8bb45048279f9c0b2777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.881912 2383828 certs.go:256] generating profile certs ...
	I0923 13:25:09.882006 2383828 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.key
	I0923 13:25:09.882034 2383828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt with IP's: []
	I0923 13:25:10.566644 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt ...
	I0923 13:25:10.566674 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: {Name:mkd81ca15f11b2786974e7876e3c9aed3e2d4234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.567469 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.key ...
	I0923 13:25:10.567490 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.key: {Name:mk6021386003345160ab870bf118db0d5b101e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.567623 2383828 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912
	I0923 13:25:10.567648 2383828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 13:25:10.852497 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912 ...
	I0923 13:25:10.852533 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912: {Name:mk7f27ae99622d8c8fa852d7ef4a1bd4d1377cc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.853247 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912 ...
	I0923 13:25:10.853270 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912: {Name:mke5687c64d611e598a2d4dfa2e1b457cefad09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.853768 2383828 certs.go:381] copying /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912 -> /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt
	I0923 13:25:10.853857 2383828 certs.go:385] copying /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912 -> /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key
	I0923 13:25:10.853920 2383828 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key
	I0923 13:25:10.853944 2383828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt with IP's: []
	I0923 13:25:11.253287 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt ...
	I0923 13:25:11.253320 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt: {Name:mkec361222a939c4fff7d39836686e89c78445d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:11.253510 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key ...
	I0923 13:25:11.253524 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key: {Name:mkd82ad2e44c4406a63509e86866460eeda368df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:11.253710 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 13:25:11.253753 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem (1078 bytes)
	I0923 13:25:11.253784 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:25:11.253812 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem (1679 bytes)
	I0923 13:25:11.254465 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:25:11.280459 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:25:11.308504 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:25:11.341407 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:25:11.365448 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 13:25:11.390204 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 13:25:11.414590 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:25:11.439501 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:25:11.463335 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:25:11.488243 2383828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
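
Note: with the certificates copied onto the node, the apiserver cert's SANs should match the IPs requested at 13:25:10.567648. A hypothetical spot check on the node:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2 (plus any DNS names)
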
	I0923 13:25:11.506147 2383828 ssh_runner.go:195] Run: openssl version
	I0923 13:25:11.511692 2383828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:25:11.521261 2383828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:25:11.524826 2383828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:25 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:25:11.524943 2383828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:25:11.532134 2383828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
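
Note: the b5213941.0 symlink name is not arbitrary; it is the OpenSSL subject-name hash printed by the x509 -hash call two lines above, with a .0 suffix for the first certificate carrying that hash:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   <- matches /etc/ssl/certs/b5213941.0 created above
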
	I0923 13:25:11.541360 2383828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:25:11.544583 2383828 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:25:11.544637 2383828 kubeadm.go:392] StartCluster: {Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:25:11.544720 2383828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:25:11.544790 2383828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:25:11.581094 2383828 cri.go:89] found id: ""
	I0923 13:25:11.581187 2383828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:25:11.590237 2383828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 13:25:11.599295 2383828 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 13:25:11.599391 2383828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:25:11.608400 2383828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:25:11.608424 2383828 kubeadm.go:157] found existing configuration files:
	
	I0923 13:25:11.608478 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:25:11.617384 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:25:11.617458 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:25:11.626442 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:25:11.635222 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:25:11.635294 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:25:11.643984 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:25:11.653034 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:25:11.653121 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:25:11.661943 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:25:11.670520 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:25:11.670582 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 13:25:11.678902 2383828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 13:25:11.719171 2383828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 13:25:11.719491 2383828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 13:25:11.740162 2383828 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 13:25:11.740239 2383828 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 13:25:11.740288 2383828 kubeadm.go:310] OS: Linux
	I0923 13:25:11.740344 2383828 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 13:25:11.740396 2383828 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 13:25:11.740445 2383828 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 13:25:11.740496 2383828 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 13:25:11.740549 2383828 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 13:25:11.740599 2383828 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 13:25:11.740647 2383828 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 13:25:11.740698 2383828 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 13:25:11.740747 2383828 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 13:25:11.804353 2383828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 13:25:11.804468 2383828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 13:25:11.804565 2383828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 13:25:11.811498 2383828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:25:11.813835 2383828 out.go:235]   - Generating certificates and keys ...
	I0923 13:25:11.814031 2383828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 13:25:11.814147 2383828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 13:25:12.062735 2383828 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 13:25:12.591731 2383828 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 13:25:13.268376 2383828 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 13:25:13.777588 2383828 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 13:25:14.367839 2383828 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 13:25:14.368150 2383828 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-133262 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 13:25:14.571927 2383828 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 13:25:14.572261 2383828 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-133262 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 13:25:14.938024 2383828 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 13:25:15.818972 2383828 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 13:25:16.397788 2383828 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 13:25:16.398106 2383828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:25:16.811849 2383828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:25:17.440724 2383828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 13:25:18.228845 2383828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:25:18.373394 2383828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:25:18.887331 2383828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:25:18.888146 2383828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:25:18.891236 2383828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:25:18.893066 2383828 out.go:235]   - Booting up control plane ...
	I0923 13:25:18.893163 2383828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:25:18.893238 2383828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:25:18.894026 2383828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:25:18.904186 2383828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:25:18.910454 2383828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:25:18.910511 2383828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 13:25:19.004454 2383828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 13:25:19.004576 2383828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:25:20.505668 2383828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501072601s
	I0923 13:25:20.505759 2383828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 13:25:26.007712 2383828 kubeadm.go:310] [api-check] The API server is healthy after 5.502311988s
	I0923 13:25:26.031158 2383828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 13:25:26.046565 2383828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 13:25:26.076539 2383828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 13:25:26.076736 2383828 kubeadm.go:310] [mark-control-plane] Marking the node addons-133262 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 13:25:26.087778 2383828 kubeadm.go:310] [bootstrap-token] Using token: kkrgrl.3o8iief7llcjzdwt
	I0923 13:25:26.090470 2383828 out.go:235]   - Configuring RBAC rules ...
	I0923 13:25:26.090609 2383828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 13:25:26.096407 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 13:25:26.106960 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 13:25:26.110745 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 13:25:26.114782 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 13:25:26.119709 2383828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 13:25:26.414947 2383828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 13:25:26.846545 2383828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 13:25:27.414986 2383828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 13:25:27.416208 2383828 kubeadm.go:310] 
	I0923 13:25:27.416286 2383828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 13:25:27.416296 2383828 kubeadm.go:310] 
	I0923 13:25:27.416373 2383828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 13:25:27.416383 2383828 kubeadm.go:310] 
	I0923 13:25:27.416408 2383828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 13:25:27.416469 2383828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 13:25:27.416523 2383828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 13:25:27.416531 2383828 kubeadm.go:310] 
	I0923 13:25:27.416593 2383828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 13:25:27.416602 2383828 kubeadm.go:310] 
	I0923 13:25:27.416649 2383828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 13:25:27.416657 2383828 kubeadm.go:310] 
	I0923 13:25:27.416707 2383828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 13:25:27.416784 2383828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 13:25:27.416855 2383828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 13:25:27.416864 2383828 kubeadm.go:310] 
	I0923 13:25:27.416947 2383828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 13:25:27.417026 2383828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 13:25:27.417034 2383828 kubeadm.go:310] 
	I0923 13:25:27.417117 2383828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kkrgrl.3o8iief7llcjzdwt \
	I0923 13:25:27.417221 2383828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc25ddfa50091362c7bfdbe09ed12c0b94b944390ba1bf979075d78a22051d17 \
	I0923 13:25:27.417246 2383828 kubeadm.go:310] 	--control-plane 
	I0923 13:25:27.417251 2383828 kubeadm.go:310] 
	I0923 13:25:27.417334 2383828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 13:25:27.417344 2383828 kubeadm.go:310] 
	I0923 13:25:27.417424 2383828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kkrgrl.3o8iief7llcjzdwt \
	I0923 13:25:27.417529 2383828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc25ddfa50091362c7bfdbe09ed12c0b94b944390ba1bf979075d78a22051d17 
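
Note: the sha256 value in the join commands is the hash of the cluster CA's public key. It can be recomputed on the control plane with the standard openssl pipeline (using this cluster's certificatesDir, /var/lib/minikube/certs; assumes an RSA CA key):

    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # bc25ddfa50091362c7bfdbe09ed12c0b94b944390ba1bf979075d78a22051d17
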
	I0923 13:25:27.421442 2383828 kubeadm.go:310] W0923 13:25:11.715767    1187 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:25:27.421763 2383828 kubeadm.go:310] W0923 13:25:11.716771    1187 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:25:27.421999 2383828 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 13:25:27.422114 2383828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
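
Note: the two v1beta3 deprecation warnings above name their own remedy. A sketch of that migration, with the output path invented for illustration:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /tmp/kubeadm-new.yaml
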
	I0923 13:25:27.422211 2383828 cni.go:84] Creating CNI manager for ""
	I0923 13:25:27.422223 2383828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:25:27.424992 2383828 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 13:25:27.427655 2383828 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 13:25:27.434913 2383828 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 13:25:27.434938 2383828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 13:25:27.453393 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
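
Note: the applied cni.yaml deploys kindnet (chosen at 13:25:27.422223). A hypothetical readiness check, assuming the manifest's DaemonSet is named kindnet in kube-system as in minikube's bundled manifest:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset kindnet --timeout=60s
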
	I0923 13:25:27.737776 2383828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 13:25:27.737920 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:27.738003 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-133262 minikube.k8s.io/updated_at=2024_09_23T13_25_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-133262 minikube.k8s.io/primary=true
	I0923 13:25:27.874905 2383828 ops.go:34] apiserver oom_adj: -16
	I0923 13:25:27.875025 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:28.375543 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:28.875477 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:29.375599 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:29.875835 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:30.375081 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:30.876003 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:31.375136 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:31.875141 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:31.987664 2383828 kubeadm.go:1113] duration metric: took 4.24984179s to wait for elevateKubeSystemPrivileges
	I0923 13:25:31.987703 2383828 kubeadm.go:394] duration metric: took 20.443068903s to StartCluster
	I0923 13:25:31.987722 2383828 settings.go:142] acquiring lock: {Name:mkec0ac22c7afe2712cd8676389ce937f473d18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:31.987847 2383828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:25:31.988235 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/kubeconfig: {Name:mk1c3c49c69db07ab1c6462bef79c6f07c9c4b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:31.988441 2383828 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 13:25:31.988585 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 13:25:31.988829 2383828 config.go:182] Loaded profile config "addons-133262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:25:31.988864 2383828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 13:25:31.988948 2383828 addons.go:69] Setting yakd=true in profile "addons-133262"
	I0923 13:25:31.988966 2383828 addons.go:234] Setting addon yakd=true in "addons-133262"
	I0923 13:25:31.988994 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.989510 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.989971 2383828 addons.go:69] Setting cloud-spanner=true in profile "addons-133262"
	I0923 13:25:31.989993 2383828 addons.go:234] Setting addon cloud-spanner=true in "addons-133262"
	I0923 13:25:31.990019 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.990092 2383828 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-133262"
	I0923 13:25:31.990110 2383828 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-133262"
	I0923 13:25:31.990135 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.990504 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.990578 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.994496 2383828 addons.go:69] Setting registry=true in profile "addons-133262"
	I0923 13:25:31.994563 2383828 addons.go:234] Setting addon registry=true in "addons-133262"
	I0923 13:25:31.994616 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.995146 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.995279 2383828 addons.go:69] Setting storage-provisioner=true in profile "addons-133262"
	I0923 13:25:31.996290 2383828 addons.go:234] Setting addon storage-provisioner=true in "addons-133262"
	I0923 13:25:31.996326 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.996775 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999264 2383828 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-133262"
	I0923 13:25:31.999371 2383828 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-133262"
	I0923 13:25:31.999947 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.005578 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999500 2383828 addons.go:69] Setting default-storageclass=true in profile "addons-133262"
	I0923 13:25:32.007965 2383828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-133262"
	I0923 13:25:32.008523 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.995298 2383828 addons.go:69] Setting volcano=true in profile "addons-133262"
	I0923 13:25:32.012948 2383828 addons.go:234] Setting addon volcano=true in "addons-133262"
	I0923 13:25:32.013004 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.013497 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.995305 2383828 addons.go:69] Setting volumesnapshots=true in profile "addons-133262"
	I0923 13:25:32.028502 2383828 addons.go:234] Setting addon volumesnapshots=true in "addons-133262"
	I0923 13:25:32.028573 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.029136 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999508 2383828 addons.go:69] Setting gcp-auth=true in profile "addons-133262"
	I0923 13:25:32.052306 2383828 mustload.go:65] Loading cluster: addons-133262
	I0923 13:25:32.052517 2383828 config.go:182] Loaded profile config "addons-133262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:25:32.052788 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999514 2383828 addons.go:69] Setting ingress=true in profile "addons-133262"
	I0923 13:25:32.070953 2383828 addons.go:234] Setting addon ingress=true in "addons-133262"
	I0923 13:25:32.071007 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.071476 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999518 2383828 addons.go:69] Setting ingress-dns=true in profile "addons-133262"
	I0923 13:25:32.092899 2383828 addons.go:234] Setting addon ingress-dns=true in "addons-133262"
	I0923 13:25:32.092953 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.093443 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.109354 2383828 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 13:25:32.114598 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 13:25:32.114680 2383828 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 13:25:32.114765 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:31.999521 2383828 addons.go:69] Setting inspektor-gadget=true in profile "addons-133262"
	I0923 13:25:32.118436 2383828 addons.go:234] Setting addon inspektor-gadget=true in "addons-133262"
	I0923 13:25:32.118545 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.999525 2383828 addons.go:69] Setting metrics-server=true in profile "addons-133262"
	I0923 13:25:32.121769 2383828 addons.go:234] Setting addon metrics-server=true in "addons-133262"
	I0923 13:25:32.121814 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.122333 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999534 2383828 out.go:177] * Verifying Kubernetes components...
	I0923 13:25:32.135133 2383828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:25:31.995291 2383828 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-133262"
	I0923 13:25:32.135493 2383828 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-133262"
	I0923 13:25:32.135848 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.160968 2383828 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 13:25:32.165055 2383828 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 13:25:32.165077 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 13:25:32.165152 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.202702 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 13:25:32.205673 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 13:25:32.205756 2383828 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 13:25:32.205880 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.211247 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 13:25:32.214845 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 13:25:32.219909 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 13:25:32.222637 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 13:25:32.242770 2383828 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 13:25:32.250532 2383828 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 13:25:32.250557 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 13:25:32.250646 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.262838 2383828 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 13:25:32.263472 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	W0923 13:25:32.266501 2383828 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 13:25:32.277288 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:25:32.277562 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 13:25:32.277651 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 13:25:32.279781 2383828 addons.go:234] Setting addon default-storageclass=true in "addons-133262"
	I0923 13:25:32.279819 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.282986 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.285904 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
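
Note: the sed pipeline above splices a hosts block into the CoreDNS Corefile so in-cluster lookups of host.minikube.internal resolve to the gateway. Assuming it succeeded, the fragment can be read back with:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    #         hosts {
    #            192.168.49.1 host.minikube.internal
    #            fallthrough
    #         }
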
	I0923 13:25:32.289027 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 13:25:32.289071 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:25:32.289082 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 13:25:32.289194 2383828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:25:32.299635 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 13:25:32.299715 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.317274 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 13:25:32.317462 2383828 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-133262"
	I0923 13:25:32.317498 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.317931 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.318077 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 13:25:32.319694 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.325032 2383828 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 13:25:32.325087 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 13:25:32.325170 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.359357 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 13:25:32.361120 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 13:25:32.361147 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 13:25:32.361222 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.361396 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:25:32.365605 2383828 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 13:25:32.365642 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 13:25:32.365710 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.398727 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.402381 2383828 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 13:25:32.402408 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 13:25:32.402473 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.417359 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.428109 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.436811 2383828 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 13:25:32.439528 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 13:25:32.439555 2383828 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 13:25:32.439632 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.506455 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.521998 2383828 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 13:25:32.525970 2383828 out.go:177]   - Using image docker.io/busybox:stable
	I0923 13:25:32.529488 2383828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 13:25:32.529517 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 13:25:32.529582 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.529774 2383828 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 13:25:32.532730 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 13:25:32.532755 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 13:25:32.532824 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.540460 2383828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 13:25:32.540480 2383828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 13:25:32.540539 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.540761 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.544187 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.566704 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.583136 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.588518 2383828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:25:32.619886 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.656566 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.657174 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.665769 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.672132 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	W0923 13:25:32.672854 2383828 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 13:25:32.672878 2383828 retry.go:31] will retry after 251.380216ms: ssh: handshake failed: EOF
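[editor's note] The handshake EOF above is benign: with this many parallel ssh sessions being opened, a dial can race sshd inside the container, so retry.go waits briefly and tries again. A minimal sketch of that retry-with-backoff shape (generic pattern, not minikube's actual retry.go):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // withRetry re-runs op until it succeeds or attempts are exhausted,
    // doubling the delay between tries.
    func withRetry(attempts int, delay time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := withRetry(5, 250*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("ssh: handshake failed: EOF") // transient
            }
            return nil
        })
        fmt.Println("result:", err, "after", calls, "attempts")
    }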
	I0923 13:25:32.834603 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 13:25:32.952650 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 13:25:32.952726 2383828 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 13:25:32.958869 2383828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 13:25:32.958945 2383828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 13:25:32.981074 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 13:25:33.003665 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 13:25:33.003753 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 13:25:33.020302 2383828 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 13:25:33.020395 2383828 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 13:25:33.064932 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 13:25:33.071188 2383828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 13:25:33.071266 2383828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 13:25:33.093040 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 13:25:33.096895 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:25:33.116925 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 13:25:33.118205 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 13:25:33.118262 2383828 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 13:25:33.127649 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 13:25:33.151493 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 13:25:33.151517 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 13:25:33.173138 2383828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 13:25:33.173162 2383828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 13:25:33.186149 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 13:25:33.186172 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 13:25:33.202184 2383828 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 13:25:33.202204 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 13:25:33.247710 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 13:25:33.247785 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 13:25:33.273067 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 13:25:33.273142 2383828 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 13:25:33.288177 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 13:25:33.288259 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 13:25:33.305420 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 13:25:33.305494 2383828 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 13:25:33.353173 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 13:25:33.368265 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 13:25:33.368342 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 13:25:33.437059 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 13:25:33.437132 2383828 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 13:25:33.440876 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 13:25:33.440949 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 13:25:33.449345 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 13:25:33.449418 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 13:25:33.473562 2383828 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:25:33.473637 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 13:25:33.523594 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 13:25:33.523675 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 13:25:33.583312 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 13:25:33.613866 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 13:25:33.613944 2383828 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 13:25:33.617882 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 13:25:33.617946 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 13:25:33.652467 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:25:33.681957 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 13:25:33.682035 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 13:25:33.690387 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 13:25:33.710624 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 13:25:33.710702 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 13:25:33.780743 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 13:25:33.780817 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 13:25:33.815507 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 13:25:33.815588 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 13:25:33.857088 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 13:25:33.857166 2383828 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 13:25:33.918017 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 13:25:33.918092 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 13:25:33.929357 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 13:25:33.929432 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 13:25:33.979747 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 13:25:33.979822 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 13:25:33.983608 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 13:25:34.037007 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 13:25:34.037089 2383828 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 13:25:34.151213 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 13:25:35.781487 2383828 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.192933328s)
	I0923 13:25:35.782545 2383828 node_ready.go:35] waiting up to 6m0s for node "addons-133262" to be "Ready" ...
	I0923 13:25:35.782865 2383828 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.496873728s)
	I0923 13:25:35.782924 2383828 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 13:25:36.428913 2383828 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-133262" context rescaled to 1 replicas
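[editor's note] The kapi.go line above trims the coredns Deployment to a single replica, which is enough for a one-node cluster. A hedged client-go sketch of that rescale (illustrative only; it reuses the kubeconfig path from the log and goes through the scale subresource so the rest of the Deployment spec is untouched):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()
        // Read the current scale of the coredns Deployment...
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // ...and write back a single replica via the scale subresource.
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }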
	I0923 13:25:36.462382 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.627742097s)
	I0923 13:25:37.802089 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
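[editor's note] Each node_ready.go:53 line through this section is one poll of the node object, waiting for the kubelet to flip the Ready condition to True. A minimal sketch of that check under the same kubeconfig assumption (the poll interval here is a guess from the timestamps):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            ok, err := nodeReady(cs, "addons-133262")
            fmt.Printf("ready=%v err=%v\n", ok, err)
            if ok {
                return
            }
            time.Sleep(2 * time.Second)
        }
    }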
	I0923 13:25:38.409822 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.428662395s)
	I0923 13:25:38.409900 2383828 addons.go:475] Verifying addon ingress=true in "addons-133262"
	I0923 13:25:38.410127 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.345169486s)
	I0923 13:25:38.410241 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.317116518s)
	I0923 13:25:38.410368 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.293369763s)
	I0923 13:25:38.410583 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.282860168s)
	I0923 13:25:38.410697 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.057460055s)
	I0923 13:25:38.410709 2383828 addons.go:475] Verifying addon registry=true in "addons-133262"
	I0923 13:25:38.410817 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.313378103s)
	I0923 13:25:38.410987 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.827495446s)
	I0923 13:25:38.411193 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.758634316s)
	W0923 13:25:38.412175 2383828 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 13:25:38.412202 2383828 retry.go:31] will retry after 192.996519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
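[editor's note] This failure is a classic CRD race: the single "kubectl apply" both creates the VolumeSnapshot CRDs and, in the same invocation, tries to create a VolumeSnapshotClass, before the API server has established the new kind in discovery, hence "no matches for kind ... ensure CRDs are installed first". The retry above (and the "apply --force" at 13:25:38.606142 below) succeeds once the CRD is registered. A hypothetical two-phase apply that avoids the race entirely, applying the CRDs, waiting for the Established condition, then applying the class (paths are the ones from this log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) error {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        crds := []string{
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
        }
        for _, f := range crds {
            if err := run("apply", "-f", f); err != nil {
                panic(err)
            }
        }
        // Block until discovery knows the new kind; this is the step the
        // single-shot apply skipped.
        if err := run("wait", "--for=condition=established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            panic(err)
        }
        // Now the VolumeSnapshotClass in this file can be mapped to its kind.
        if err := run("apply", "-f",
            "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
            panic(err)
        }
    }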
	I0923 13:25:38.411249 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.720791712s)
	I0923 13:25:38.412240 2383828 addons.go:475] Verifying addon metrics-server=true in "addons-133262"
	I0923 13:25:38.411301 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.42762092s)
	I0923 13:25:38.413014 2383828 out.go:177] * Verifying ingress addon...
	I0923 13:25:38.413041 2383828 out.go:177] * Verifying registry addon...
	I0923 13:25:38.414885 2383828 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-133262 service yakd-dashboard -n yakd-dashboard
	
	I0923 13:25:38.419102 2383828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 13:25:38.419832 2383828 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0923 13:25:38.460388 2383828 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
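[editor's note] The "object has been modified" warning above is an optimistic-concurrency conflict: another writer updated the local-path StorageClass between minikube's read and its update. The usual cure is to re-read and retry on conflict, which client-go packages as retry.RetryOnConflict. A sketch of that pattern applied to the same annotation (function name and flow are illustrative, not minikube's code):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func markNonDefault(cs *kubernetes.Clientset, name string) error {
        // RetryOnConflict re-runs the closure whenever Update returns a 409
        // conflict, so a concurrent writer no longer aborts the whole step.
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := markNonDefault(cs, "local-path"); err != nil {
            panic(err)
        }
    }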
	I0923 13:25:38.463085 2383828 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 13:25:38.463119 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:38.463335 2383828 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 13:25:38.463348 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
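[editor's note] The kapi.go:86/kapi.go:96 pairs above, and the long run of them that follows, are a poll loop: list pods by label selector in the addon's namespace and report each pod's phase until everything is Running. A condensed sketch of one iteration, with the selector and namespace taken verbatim from the log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // One poll; the real loop repeats this until every matched pod is Running.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
            LabelSelector: "kubernetes.io/minikube-addons=registry",
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("Found %d Pods for label selector kubernetes.io/minikube-addons=registry\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("  %s phase=%s running=%v\n", p.Name, p.Status.Phase, p.Status.Phase == corev1.PodRunning)
        }
    }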
	I0923 13:25:38.606142 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:25:39.005138 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:39.021026 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:39.118283 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.966973654s)
	I0923 13:25:39.118406 2383828 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-133262"
	I0923 13:25:39.121268 2383828 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 13:25:39.124770 2383828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 13:25:39.156704 2383828 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 13:25:39.156770 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:39.439937 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:39.444350 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:39.640632 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:39.925058 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:39.925531 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:39.971775 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.365584799s)
	I0923 13:25:40.129039 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:40.286956 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:40.425951 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:40.427311 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:40.630301 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:40.924856 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:40.925869 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:41.129822 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:41.425406 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:41.425833 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:41.629752 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:41.926255 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:41.927436 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:42.132576 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:42.424685 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:42.424871 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:42.635312 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:42.637181 2383828 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 13:25:42.637349 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:42.660570 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:42.775194 2383828 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
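[editor's note] "scp memory --> path" in these ssh_runner.go lines means the payload never exists as a local file: an in-memory asset (here the GCP credentials and project name) is streamed over the ssh session and written on the node. Minikube speaks the scp protocol in-process; a rough shell-out equivalent, piping bytes through "sudo tee" over the same port, key, and user the log reports, would look like this (the payload string is a placeholder):

    package main

    import (
        "bytes"
        "os/exec"
    )

    // pushBytes writes data to remotePath on the node by piping it through ssh
    // into `sudo tee`, mirroring the "scp memory --> file" lines in the log.
    func pushBytes(data []byte, remotePath string) error {
        cmd := exec.Command("ssh",
            "-i", "/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa",
            "-p", "35734", "docker@127.0.0.1",
            "sudo tee "+remotePath+" >/dev/null")
        cmd.Stdin = bytes.NewReader(data)
        return cmd.Run()
    }

    func main() {
        // Illustrative payload for the google_cloud_project transfer above.
        if err := pushBytes([]byte("my-project\n"), "/var/lib/minikube/google_cloud_project"); err != nil {
            panic(err)
        }
    }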
	I0923 13:25:42.787736 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:42.799009 2383828 addons.go:234] Setting addon gcp-auth=true in "addons-133262"
	I0923 13:25:42.799068 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:42.799666 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:42.819110 2383828 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 13:25:42.819169 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:42.837017 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:42.928031 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:42.928785 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:42.943532 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:25:42.946272 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 13:25:42.948939 2383828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 13:25:42.948964 2383828 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 13:25:42.967771 2383828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 13:25:42.967799 2383828 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 13:25:42.986757 2383828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 13:25:42.986781 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 13:25:43.007805 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 13:25:43.133188 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:43.440942 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:43.448247 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:43.598168 2383828 addons.go:475] Verifying addon gcp-auth=true in "addons-133262"
	I0923 13:25:43.600804 2383828 out.go:177] * Verifying gcp-auth addon...
	I0923 13:25:43.604541 2383828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 13:25:43.614384 2383828 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 13:25:43.614417 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:43.714958 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:43.927260 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:43.928296 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:44.108166 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:44.129766 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:44.425907 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:44.428947 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:44.608989 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:44.629165 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:44.924442 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:44.924911 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:45.109621 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:45.134831 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:45.286174 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:45.423899 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:45.424213 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:45.608699 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:45.630848 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:45.923717 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:45.924806 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:46.108554 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:46.134002 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:46.423457 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:46.423949 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:46.607763 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:46.628666 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:46.923946 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:46.924341 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:47.108334 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:47.128936 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:47.424042 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:47.425089 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:47.608593 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:47.628453 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:47.786101 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:47.924546 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:47.925407 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:48.107567 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:48.129020 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:48.424760 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:48.425682 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:48.607946 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:48.629119 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:48.923346 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:48.924113 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:49.107465 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:49.128820 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:49.423331 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:49.424397 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:49.609143 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:49.628320 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:49.786566 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:49.924514 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:49.924812 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:50.108212 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:50.128656 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:50.423917 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:50.426088 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:50.607776 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:50.627970 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:50.923145 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:50.923993 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:51.108698 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:51.129331 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:51.424158 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:51.424921 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:51.607952 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:51.628227 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:51.923369 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:51.924228 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:52.107969 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:52.129521 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:52.286509 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:52.424000 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:52.424964 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:52.608383 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:52.628653 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:52.924655 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:52.925393 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:53.108542 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:53.129828 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:53.424003 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:53.424995 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:53.608550 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:53.629037 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:53.923760 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:53.924375 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:54.108444 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:54.128575 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:54.424136 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:54.424452 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:54.608327 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:54.628015 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:54.786192 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:54.924376 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:54.925390 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:55.108016 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:55.129009 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:55.424425 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:55.424771 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:55.608044 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:55.628274 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:55.924611 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:55.925486 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:56.108074 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:56.128941 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:56.423602 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:56.424012 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:56.786783 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	[... identical kapi.go:96 "waiting for pod" lines for the gcp-auth, csi-hostpath-driver, registry, and ingress-nginx label selectors repeat roughly every 500ms, and node_ready.go:53 reports node "addons-133262" as "Ready":"False" roughly every 2s, from 13:25:56 through 13:26:16 ...]
	I0923 13:26:16.636992 2383828 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 13:26:16.637020 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
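The kapi.go:96 lines above are a plain list-and-sleep poll over a Kubernetes label selector: list the matching pods, log any that are not yet Running, sleep, and retry until the deadline. A minimal client-go sketch of that pattern (a hypothetical illustration, not minikube's actual kapi implementation; the selector, namespace, and 6m timeout are assumptions taken from this log) could look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForSelector polls until every pod matching the selector is Running,
	// logging the not-yet-Running ones much like the kapi.go:96 lines above.
	func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // deadline exceeded or cancelled
			case <-time.After(500 * time.Millisecond): // poll interval seen in the log
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForSelector(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			panic(err)
		}
	}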
	I0923 13:26:16.805123 2383828 node_ready.go:49] node "addons-133262" has status "Ready":"True"
	I0923 13:26:16.805149 2383828 node_ready.go:38] duration metric: took 41.022536428s for node "addons-133262" to be "Ready" ...
	I0923 13:26:16.805159 2383828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
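The node_ready lines record the same pattern applied to the node object: the node counts as "Ready" once its NodeReady condition turns True, which here took 41.02s after startup. A one-function sketch of that check (hypothetical helper name, not minikube's node_ready.go itself):

	// import corev1 "k8s.io/api/core/v1"
	//
	// isNodeReady mirrors the check behind the node_ready.go lines: a node is
	// "Ready" when its NodeReady condition reports status True.
	func isNodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}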
	I0923 13:26:16.913885 2383828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r5mdg" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:16.951549 2383828 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 13:26:16.951577 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:16.952438 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:17.127438 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:17.160927 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:17.432503 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:17.433603 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:17.608480 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:17.630006 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:17.925104 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:17.926484 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:18.107966 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:18.129404 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:18.421302 2383828 pod_ready.go:93] pod "coredns-7c65d6cfc9-r5mdg" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.421379 2383828 pod_ready.go:82] duration metric: took 1.507456205s for pod "coredns-7c65d6cfc9-r5mdg" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.421409 2383828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.425730 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:18.427429 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:18.429046 2383828 pod_ready.go:93] pod "etcd-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.429069 2383828 pod_ready.go:82] duration metric: took 7.651873ms for pod "etcd-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.429084 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.434109 2383828 pod_ready.go:93] pod "kube-apiserver-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.434138 2383828 pod_ready.go:82] duration metric: took 5.046437ms for pod "kube-apiserver-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.434150 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.439598 2383828 pod_ready.go:93] pod "kube-controller-manager-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.439681 2383828 pod_ready.go:82] duration metric: took 5.521536ms for pod "kube-controller-manager-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.439712 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qsbr8" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.448014 2383828 pod_ready.go:93] pod "kube-proxy-qsbr8" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.448041 2383828 pod_ready.go:82] duration metric: took 8.31315ms for pod "kube-proxy-qsbr8" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.448052 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.608120 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:18.629275 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:18.819692 2383828 pod_ready.go:93] pod "kube-scheduler-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.819716 2383828 pod_ready.go:82] duration metric: took 371.655421ms for pod "kube-scheduler-addons-133262" in "kube-system" namespace to be "Ready" ...
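The pod_ready.go checks gate on the pod's PodReady condition rather than its phase, which is why the metrics-server pod below can keep logging "Ready":"False" even while Running: its readiness probe has not yet passed. A minimal sketch of that condition check (hypothetical helper name, same corev1 import as the sketch above):

	// isPodReady mirrors pod_ready.go's gate: the kubelet sets the PodReady
	// condition to True only after all containers pass their readiness probes.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}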
	I0923 13:26:18.819728 2383828 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.925018 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:18.926638 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:19.108614 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:19.129912 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:20.827914 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	[... identical kapi.go:96 "waiting for pod" lines for the registry, ingress-nginx, gcp-auth, and csi-hostpath-driver label selectors repeat roughly every 500ms, and pod_ready.go:103 reports pod "metrics-server-84c5f94fbc-dqnhw" as "Ready":"False" roughly every 2-2.5s, from 13:26:18 through 13:26:49 ...]
	I0923 13:26:49.433796 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:49.434273 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:49.608250 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:49.629496 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:49.924209 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:49.927394 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:50.112393 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:50.141056 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:50.426432 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:50.427848 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:50.609193 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:50.629564 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:50.826654 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:50.925277 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:50.925459 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:51.109502 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:51.129979 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:51.424931 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:51.426827 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:51.607779 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:51.630914 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:51.925515 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:51.926128 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:52.107821 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:52.129416 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:52.426982 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:52.428045 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:52.609048 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:52.635441 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:52.829448 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:52.927255 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:52.928637 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:53.114000 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:53.135575 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:53.425124 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:53.426490 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:53.608173 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:53.632288 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:53.924879 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:53.925839 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:54.108431 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:54.130064 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:54.423850 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:54.424803 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:54.608858 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:54.631272 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:54.925937 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:54.927386 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:55.114505 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:55.137604 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:55.336635 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:55.425715 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:55.427081 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:55.608839 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:55.632770 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:55.925063 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:55.925569 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:56.115411 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:56.131630 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:56.425028 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:56.426021 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:56.608664 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:56.629866 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:56.926440 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:56.926859 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:57.108467 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:57.130256 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:57.425565 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:57.426881 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:57.609766 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:57.631522 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:57.848276 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:57.925589 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:57.926613 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:58.108061 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:58.130231 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:58.428638 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:58.430028 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:58.610101 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:58.630423 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:58.939227 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:58.940370 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:59.108063 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:59.129831 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:59.424864 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:59.425049 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:59.608521 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:59.629400 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:59.924929 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:59.925607 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:00.109319 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:00.131385 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:00.326741 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:00.425134 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:00.425736 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:00.608187 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:00.630261 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:00.924611 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:00.925609 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:01.108482 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:01.131704 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:01.430029 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:01.434957 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:01.607779 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:01.630225 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:01.942371 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:01.943654 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:02.108432 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:02.130860 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:02.424998 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:02.426036 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:02.608703 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:02.631070 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:02.826705 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:02.940137 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:02.940713 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:03.108948 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:03.129909 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:03.425654 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:03.428546 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:03.608123 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:03.630220 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:03.929094 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:03.929953 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:04.108375 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:04.130124 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:04.425986 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:04.428220 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:04.609126 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:04.632554 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:04.828070 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:04.924862 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:04.926426 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:05.108702 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:05.130029 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:05.430290 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:05.432601 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:05.609965 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:05.629241 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:05.965785 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:05.986261 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:06.113906 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:06.221296 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:06.426093 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:06.426830 2383828 kapi.go:107] duration metric: took 1m28.007722418s to wait for kubernetes.io/minikube-addons=registry ...
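	Each kapi.go:96 line above is one iteration of minikube's label-selector wait loop, and the kapi.go:107 line records the total time that loop ran (1m28s for the registry label here); the timestamps show it polling roughly twice per second. A rough manual equivalent using kubectl — a sketch only, since minikube queries the API directly and waits for every matching pod rather than the first — would be:

	    # Poll until a pod carrying the addon label reports Running
	    # (selector, namespace and context taken from the log above).
	    until kubectl --context addons-133262 -n kube-system get pods \
	        -l kubernetes.io/minikube-addons=registry \
	        -o jsonpath='{.items[*].status.phase}' | grep -qw Running; do
	      sleep 0.5   # matches the ~500ms cadence visible in the timestamps
	    done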
	I0923 13:27:06.609440 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:06.630984 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:06.828181 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:06.925007 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:07.108169 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:07.130553 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:07.429446 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:07.610515 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:07.631178 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:07.928566 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:08.153119 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:08.155484 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:08.425404 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:08.608582 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:08.631061 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:08.924414 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:09.108227 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:09.132719 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:09.326297 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:09.426358 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:09.608437 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:09.630725 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:09.925223 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:10.109249 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:10.132124 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:10.425940 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:10.608143 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:10.629578 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:10.938523 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:11.109170 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:11.130262 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:11.427987 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:11.610666 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:11.635149 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:11.825783 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:11.924894 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:12.110369 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:12.130346 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:12.424588 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:12.607769 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:12.629949 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:12.930645 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:13.108546 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:13.135919 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:13.426445 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:13.608884 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:13.630692 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:13.827763 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:13.925431 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:14.109183 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:14.129742 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:14.424960 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:14.608136 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:14.630105 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:14.924059 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:15.110266 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:15.130293 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:15.429250 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:15.609425 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:15.630519 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:15.925153 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:16.108867 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:16.130191 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:16.326153 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:16.424176 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:16.608289 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:16.629486 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:16.924711 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:17.108323 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:17.129255 2383828 kapi.go:107] duration metric: took 1m38.00448827s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 13:27:17.424604 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:17.607610 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:17.924643 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:18.108043 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:18.326219 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:18.424275 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:18.608499 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:18.924779 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:19.108343 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:19.424411 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:19.607534 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:19.925719 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:20.107995 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:20.326285 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:20.424926 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:20.608395 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:20.925436 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:21.108021 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:21.424136 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:21.608172 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:21.925823 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:22.109384 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:22.329093 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:22.425194 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:22.608312 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:22.924266 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:23.108430 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:23.425129 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:23.608158 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:23.925678 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:24.108712 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:24.424294 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:24.608749 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:24.830907 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:24.927893 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:25.115382 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:25.425227 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:25.608049 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:25.925661 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:26.108570 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:26.424563 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:26.608660 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:26.839697 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:26.926678 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:27.109456 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:27.427209 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:27.608835 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:27.924668 2383828 kapi.go:107] duration metric: took 1m49.504828577s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 13:27:28.108170 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:28.608618 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:29.109389 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:29.328626 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:29.609997 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:30.109077 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:30.609336 2383828 kapi.go:107] duration metric: took 1m47.004794044s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 13:27:30.611924 2383828 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-133262 cluster.
	I0923 13:27:30.614489 2383828 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 13:27:30.617196 2383828 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 13:27:30.620413 2383828 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 13:27:30.622924 2383828 addons.go:510] duration metric: took 1m58.634040955s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
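	The three out.go:177 hints above are actionable: per the log's own wording, pods created from this point on get the credential mount automatically, while pre-existing pods must be recreated or the addon re-enabled with --refresh. A sketch of the latter (flag name and cluster profile taken from the log text):

	    # Remount GCP credentials into already-running pods, as the log suggests.
	    minikube -p addons-133262 addons enable gcp-auth --refresh

	    # To opt a single pod out instead, give it the label the log names
	    # (the "true" value is an assumption; the log names only the key):
	    #   metadata.labels: { gcp-auth-skip-secret: "true" }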
	I0923 13:27:31.825787 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:34.326055 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:36.326433 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:36.827741 2383828 pod_ready.go:93] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:27:36.827771 2383828 pod_ready.go:82] duration metric: took 1m18.008034234s for pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace to be "Ready" ...
	I0923 13:27:36.827784 2383828 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4m26g" in "kube-system" namespace to be "Ready" ...
	I0923 13:27:36.834630 2383828 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4m26g" in "kube-system" namespace has status "Ready":"True"
	I0923 13:27:36.834660 2383828 pod_ready.go:82] duration metric: took 6.867982ms for pod "nvidia-device-plugin-daemonset-4m26g" in "kube-system" namespace to be "Ready" ...
	I0923 13:27:36.834682 2383828 pod_ready.go:39] duration metric: took 1m20.029511263s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
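	The pod_ready.go waits that just completed cover the system-critical labels listed above (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler). The same readiness condition can be checked by hand with kubectl wait — a sketch for one of those labels, not minikube's actual code path:

	    # Block until kube-dns pods report Ready, mirroring pod_ready.go's
	    # per-label wait (timeout matches the 6m0s used in the log).
	    kubectl --context addons-133262 -n kube-system wait pod \
	        -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s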
	I0923 13:27:36.834698 2383828 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:27:36.834732 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:27:36.834794 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:27:36.888124 2383828 cri.go:89] found id: "9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:36.888148 2383828 cri.go:89] found id: ""
	I0923 13:27:36.888156 2383828 logs.go:276] 1 containers: [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23]
	I0923 13:27:36.888219 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:36.893253 2383828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:27:36.893387 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:27:36.933867 2383828 cri.go:89] found id: "227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:36.933890 2383828 cri.go:89] found id: ""
	I0923 13:27:36.933898 2383828 logs.go:276] 1 containers: [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7]
	I0923 13:27:36.933953 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:36.937393 2383828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:27:36.937521 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:27:36.975388 2383828 cri.go:89] found id: "62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:36.975410 2383828 cri.go:89] found id: ""
	I0923 13:27:36.975418 2383828 logs.go:276] 1 containers: [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6]
	I0923 13:27:36.975488 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:36.978917 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:27:36.978992 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:27:37.026940 2383828 cri.go:89] found id: "1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:37.026968 2383828 cri.go:89] found id: ""
	I0923 13:27:37.026976 2383828 logs.go:276] 1 containers: [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09]
	I0923 13:27:37.027036 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:37.031174 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:27:37.031273 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:27:37.088807 2383828 cri.go:89] found id: "6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:37.088831 2383828 cri.go:89] found id: ""
	I0923 13:27:37.088838 2383828 logs.go:276] 1 containers: [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d]
	I0923 13:27:37.088896 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:37.092489 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:27:37.092589 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:27:37.130778 2383828 cri.go:89] found id: "3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:37.130803 2383828 cri.go:89] found id: ""
	I0923 13:27:37.130810 2383828 logs.go:276] 1 containers: [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a]
	I0923 13:27:37.130892 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:37.134501 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:27:37.134578 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:27:37.173172 2383828 cri.go:89] found id: "de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:37.173194 2383828 cri.go:89] found id: ""
	I0923 13:27:37.173202 2383828 logs.go:276] 1 containers: [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78]
	I0923 13:27:37.173269 2383828 ssh_runner.go:195] Run: which crictl
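	The cri.go:54/cri.go:89 pairs above repeat the same two-step discovery once per component: list matching containers quietly to capture the ID, then resolve crictl's path for the log fetch that follows. Condensed from the Run: lines, the per-component commands are:

	    # Run once for each of kube-apiserver, etcd, coredns, kube-scheduler,
	    # kube-proxy, kube-controller-manager and kindnet:
	    sudo crictl ps -a --quiet --name=kube-apiserver   # prints the container ID
	    which crictl                                      # path reused by "crictl logs" below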
	I0923 13:27:37.177038 2383828 logs.go:123] Gathering logs for dmesg ...
	I0923 13:27:37.177064 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:27:37.199500 2383828 logs.go:123] Gathering logs for kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] ...
	I0923 13:27:37.199538 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:37.265609 2383828 logs.go:123] Gathering logs for coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] ...
	I0923 13:27:37.265654 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:37.308188 2383828 logs.go:123] Gathering logs for kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] ...
	I0923 13:27:37.308222 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:37.364448 2383828 logs.go:123] Gathering logs for kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] ...
	I0923 13:27:37.364484 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:37.407944 2383828 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:27:37.407976 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:27:37.503765 2383828 logs.go:123] Gathering logs for kubelet ...
	I0923 13:27:37.503806 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 13:27:37.536529 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:37.536775 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:37.596083 2383828 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:27:37.596124 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:27:37.773537 2383828 logs.go:123] Gathering logs for etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] ...
	I0923 13:27:37.773566 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:37.829819 2383828 logs.go:123] Gathering logs for kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] ...
	I0923 13:27:37.829851 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:37.903553 2383828 logs.go:123] Gathering logs for kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] ...
	I0923 13:27:37.903589 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:37.949912 2383828 logs.go:123] Gathering logs for container status ...
	I0923 13:27:37.949945 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
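	The logs.go:123 sequence above collects one bounded slice per source. Stripped of the ssh_runner wrapping, the gathering commands are (all verbatim from the Run: lines; <container-id> stands in for the IDs discovered earlier):

	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings
	    sudo journalctl -u kubelet -n 400                                        # kubelet unit log
	    sudo journalctl -u crio -n 400                                           # CRI-O unit log
	    sudo /usr/bin/crictl logs --tail 400 <container-id>                      # per-component logs
	    sudo crictl ps -a                                                        # container status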
	I0923 13:27:38.018475 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:38.018552 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 13:27:38.018621 2383828 out.go:270] X Problems detected in kubelet:
	W0923 13:27:38.018634 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:38.018644 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:38.018658 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:38.018665 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:27:48.019881 2383828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:27:48.035316 2383828 api_server.go:72] duration metric: took 2m16.046841632s to wait for apiserver process to appear ...
	I0923 13:27:48.035344 2383828 api_server.go:88] waiting for apiserver healthz status ...
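	api_server.go confirms the apiserver in two stages: the pgrep at 13:27:48.019881 above verifies the process exists, then the healthz wait starting here polls the endpoint until it answers. A manual spot-check of the same two conditions (a sketch; minikube probes the HTTPS endpoint directly, whereas this goes through kubectl):

	    sudo pgrep -xnf kube-apiserver.*minikube.*          # process check, verbatim from the log
	    kubectl --context addons-133262 get --raw /healthz  # prints "ok" once the apiserver is healthy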
	I0923 13:27:48.035384 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:27:48.035446 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:27:48.085240 2383828 cri.go:89] found id: "9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:48.085263 2383828 cri.go:89] found id: ""
	I0923 13:27:48.085271 2383828 logs.go:276] 1 containers: [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23]
	I0923 13:27:48.085332 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.089041 2383828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:27:48.089114 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:27:48.127126 2383828 cri.go:89] found id: "227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:48.127146 2383828 cri.go:89] found id: ""
	I0923 13:27:48.127154 2383828 logs.go:276] 1 containers: [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7]
	I0923 13:27:48.127220 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.130855 2383828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:27:48.130931 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:27:48.169933 2383828 cri.go:89] found id: "62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:48.169956 2383828 cri.go:89] found id: ""
	I0923 13:27:48.169964 2383828 logs.go:276] 1 containers: [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6]
	I0923 13:27:48.170017 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.173593 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:27:48.173666 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:27:48.217851 2383828 cri.go:89] found id: "1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:48.217875 2383828 cri.go:89] found id: ""
	I0923 13:27:48.217920 2383828 logs.go:276] 1 containers: [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09]
	I0923 13:27:48.217983 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.221539 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:27:48.221608 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:27:48.260958 2383828 cri.go:89] found id: "6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:48.260982 2383828 cri.go:89] found id: ""
	I0923 13:27:48.260990 2383828 logs.go:276] 1 containers: [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d]
	I0923 13:27:48.261047 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.264814 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:27:48.264887 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:27:48.303207 2383828 cri.go:89] found id: "3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:48.303227 2383828 cri.go:89] found id: ""
	I0923 13:27:48.303234 2383828 logs.go:276] 1 containers: [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a]
	I0923 13:27:48.303290 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.307190 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:27:48.307311 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:27:48.345328 2383828 cri.go:89] found id: "de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:48.345353 2383828 cri.go:89] found id: ""
	I0923 13:27:48.345361 2383828 logs.go:276] 1 containers: [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78]
	I0923 13:27:48.345415 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.349052 2383828 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:27:48.349077 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:27:48.440481 2383828 logs.go:123] Gathering logs for kubelet ...
	I0923 13:27:48.440519 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 13:27:48.471627 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:48.471961 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:48.532975 2383828 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:27:48.533015 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:27:48.676516 2383828 logs.go:123] Gathering logs for kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] ...
	I0923 13:27:48.676551 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:48.743456 2383828 logs.go:123] Gathering logs for etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] ...
	I0923 13:27:48.743491 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:48.801610 2383828 logs.go:123] Gathering logs for coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] ...
	I0923 13:27:48.801645 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:48.844944 2383828 logs.go:123] Gathering logs for kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] ...
	I0923 13:27:48.844975 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:48.892863 2383828 logs.go:123] Gathering logs for kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] ...
	I0923 13:27:48.892898 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:48.965213 2383828 logs.go:123] Gathering logs for dmesg ...
	I0923 13:27:48.965246 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:27:48.982076 2383828 logs.go:123] Gathering logs for kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] ...
	I0923 13:27:48.982107 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:49.032446 2383828 logs.go:123] Gathering logs for kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] ...
	I0923 13:27:49.032476 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:49.081688 2383828 logs.go:123] Gathering logs for container status ...
	I0923 13:27:49.081717 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:27:49.140973 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:49.141006 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 13:27:49.141069 2383828 out.go:270] X Problems detected in kubelet:
	W0923 13:27:49.141087 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:49.141102 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:49.141110 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:49.141123 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:27:59.141822 2383828 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:27:59.149585 2383828 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 13:27:59.150578 2383828 api_server.go:141] control plane version: v1.31.1
	I0923 13:27:59.150608 2383828 api_server.go:131] duration metric: took 11.115252928s to wait for apiserver health ...
	I0923 13:27:59.150617 2383828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:27:59.150645 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:27:59.150719 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:27:59.197911 2383828 cri.go:89] found id: "9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:59.197932 2383828 cri.go:89] found id: ""
	I0923 13:27:59.197941 2383828 logs.go:276] 1 containers: [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23]
	I0923 13:27:59.197995 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.201940 2383828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:27:59.202006 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:27:59.238531 2383828 cri.go:89] found id: "227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:59.238551 2383828 cri.go:89] found id: ""
	I0923 13:27:59.238559 2383828 logs.go:276] 1 containers: [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7]
	I0923 13:27:59.238611 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.242085 2383828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:27:59.242204 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:27:59.280989 2383828 cri.go:89] found id: "62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:59.281010 2383828 cri.go:89] found id: ""
	I0923 13:27:59.281017 2383828 logs.go:276] 1 containers: [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6]
	I0923 13:27:59.281074 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.284557 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:27:59.284637 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:27:59.324082 2383828 cri.go:89] found id: "1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:59.324103 2383828 cri.go:89] found id: ""
	I0923 13:27:59.324111 2383828 logs.go:276] 1 containers: [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09]
	I0923 13:27:59.324165 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.327636 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:27:59.327740 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:27:59.365535 2383828 cri.go:89] found id: "6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:59.365562 2383828 cri.go:89] found id: ""
	I0923 13:27:59.365572 2383828 logs.go:276] 1 containers: [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d]
	I0923 13:27:59.365643 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.369260 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:27:59.369333 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:27:59.406889 2383828 cri.go:89] found id: "3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:59.406956 2383828 cri.go:89] found id: ""
	I0923 13:27:59.406971 2383828 logs.go:276] 1 containers: [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a]
	I0923 13:27:59.407044 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.410404 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:27:59.410504 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:27:59.464101 2383828 cri.go:89] found id: "de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:59.464123 2383828 cri.go:89] found id: ""
	I0923 13:27:59.464130 2383828 logs.go:276] 1 containers: [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78]
	I0923 13:27:59.464210 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.467715 2383828 logs.go:123] Gathering logs for dmesg ...
	I0923 13:27:59.467741 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:27:59.484127 2383828 logs.go:123] Gathering logs for kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] ...
	I0923 13:27:59.484159 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:59.535894 2383828 logs.go:123] Gathering logs for kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] ...
	I0923 13:27:59.535971 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:59.581931 2383828 logs.go:123] Gathering logs for container status ...
	I0923 13:27:59.581956 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:27:59.630190 2383828 logs.go:123] Gathering logs for kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] ...
	I0923 13:27:59.630220 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:59.697374 2383828 logs.go:123] Gathering logs for kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] ...
	I0923 13:27:59.697409 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:59.735991 2383828 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:27:59.736021 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:27:59.826571 2383828 logs.go:123] Gathering logs for kubelet ...
	I0923 13:27:59.826656 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 13:27:59.899998 2383828 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:27:59.900035 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:28:00.099569 2383828 logs.go:123] Gathering logs for etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] ...
	I0923 13:28:00.099607 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:28:00.174513 2383828 logs.go:123] Gathering logs for coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] ...
	I0923 13:28:00.174556 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:28:00.241997 2383828 logs.go:123] Gathering logs for kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] ...
	I0923 13:28:00.242034 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:28:02.811205 2383828 system_pods.go:59] 18 kube-system pods found
	I0923 13:28:02.811254 2383828 system_pods.go:61] "coredns-7c65d6cfc9-r5mdg" [244c7077-c0d1-4d2d-92f7-49811a2e7840] Running
	I0923 13:28:02.811262 2383828 system_pods.go:61] "csi-hostpath-attacher-0" [2dfdc637-b058-47a4-8127-066e22a8c844] Running
	I0923 13:28:02.811268 2383828 system_pods.go:61] "csi-hostpath-resizer-0" [bf94dfec-f4ec-4276-8c84-e9d52b353dd1] Running
	I0923 13:28:02.811273 2383828 system_pods.go:61] "csi-hostpathplugin-4l5sb" [4b14671b-9a65-4b4f-9656-1a542720db35] Running
	I0923 13:28:02.811278 2383828 system_pods.go:61] "etcd-addons-133262" [ccd2243d-7923-4bd5-aad1-4bcdf84093b0] Running
	I0923 13:28:02.811282 2383828 system_pods.go:61] "kindnet-j682f" [30af3434-889d-4dfc-933a-a18b65eae56b] Running
	I0923 13:28:02.811286 2383828 system_pods.go:61] "kube-apiserver-addons-133262" [a07b8088-fb80-4c58-9f12-a59ce48acae6] Running
	I0923 13:28:02.811290 2383828 system_pods.go:61] "kube-controller-manager-addons-133262" [402fc2e9-9278-4d3c-ba42-58cf9e6f7256] Running
	I0923 13:28:02.811295 2383828 system_pods.go:61] "kube-ingress-dns-minikube" [f3f96ece-39b2-4aef-afc3-deeac0208c34] Running
	I0923 13:28:02.811299 2383828 system_pods.go:61] "kube-proxy-qsbr8" [352eb868-c25d-49b6-9c55-9960dc2cdf8e] Running
	I0923 13:28:02.811303 2383828 system_pods.go:61] "kube-scheduler-addons-133262" [a1b18f24-3925-4dbd-adbf-b70661d68d91] Running
	I0923 13:28:02.811307 2383828 system_pods.go:61] "metrics-server-84c5f94fbc-dqnhw" [6d7335f6-5dfb-4227-9606-8d8b1b126d40] Running
	I0923 13:28:02.811321 2383828 system_pods.go:61] "nvidia-device-plugin-daemonset-4m26g" [c0e73bf1-5273-4a14-9517-202ce22276b8] Running
	I0923 13:28:02.811325 2383828 system_pods.go:61] "registry-66c9cd494c-2g5d2" [d093e650-6688-49f8-9c46-28a49dd5a974] Running
	I0923 13:28:02.811328 2383828 system_pods.go:61] "registry-proxy-pqtjc" [cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8] Running
	I0923 13:28:02.811339 2383828 system_pods.go:61] "snapshot-controller-56fcc65765-5t68w" [15a9f6f7-dd61-455c-be65-26312ab5fa53] Running
	I0923 13:28:02.811343 2383828 system_pods.go:61] "snapshot-controller-56fcc65765-mjwxw" [8d203518-0a49-462e-b208-58bf3d4f9059] Running
	I0923 13:28:02.811346 2383828 system_pods.go:61] "storage-provisioner" [c54ff386-7dac-4422-9ce3-010b14a0da61] Running
	I0923 13:28:02.811353 2383828 system_pods.go:74] duration metric: took 3.660729215s to wait for pod list to return data ...
	I0923 13:28:02.811364 2383828 default_sa.go:34] waiting for default service account to be created ...
	I0923 13:28:02.814522 2383828 default_sa.go:45] found service account: "default"
	I0923 13:28:02.814550 2383828 default_sa.go:55] duration metric: took 3.179207ms for default service account to be created ...
	I0923 13:28:02.814561 2383828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 13:28:02.824546 2383828 system_pods.go:86] 18 kube-system pods found
	I0923 13:28:02.824586 2383828 system_pods.go:89] "coredns-7c65d6cfc9-r5mdg" [244c7077-c0d1-4d2d-92f7-49811a2e7840] Running
	I0923 13:28:02.824595 2383828 system_pods.go:89] "csi-hostpath-attacher-0" [2dfdc637-b058-47a4-8127-066e22a8c844] Running
	I0923 13:28:02.824600 2383828 system_pods.go:89] "csi-hostpath-resizer-0" [bf94dfec-f4ec-4276-8c84-e9d52b353dd1] Running
	I0923 13:28:02.824627 2383828 system_pods.go:89] "csi-hostpathplugin-4l5sb" [4b14671b-9a65-4b4f-9656-1a542720db35] Running
	I0923 13:28:02.824639 2383828 system_pods.go:89] "etcd-addons-133262" [ccd2243d-7923-4bd5-aad1-4bcdf84093b0] Running
	I0923 13:28:02.824644 2383828 system_pods.go:89] "kindnet-j682f" [30af3434-889d-4dfc-933a-a18b65eae56b] Running
	I0923 13:28:02.824650 2383828 system_pods.go:89] "kube-apiserver-addons-133262" [a07b8088-fb80-4c58-9f12-a59ce48acae6] Running
	I0923 13:28:02.824661 2383828 system_pods.go:89] "kube-controller-manager-addons-133262" [402fc2e9-9278-4d3c-ba42-58cf9e6f7256] Running
	I0923 13:28:02.824666 2383828 system_pods.go:89] "kube-ingress-dns-minikube" [f3f96ece-39b2-4aef-afc3-deeac0208c34] Running
	I0923 13:28:02.824670 2383828 system_pods.go:89] "kube-proxy-qsbr8" [352eb868-c25d-49b6-9c55-9960dc2cdf8e] Running
	I0923 13:28:02.824680 2383828 system_pods.go:89] "kube-scheduler-addons-133262" [a1b18f24-3925-4dbd-adbf-b70661d68d91] Running
	I0923 13:28:02.824685 2383828 system_pods.go:89] "metrics-server-84c5f94fbc-dqnhw" [6d7335f6-5dfb-4227-9606-8d8b1b126d40] Running
	I0923 13:28:02.824707 2383828 system_pods.go:89] "nvidia-device-plugin-daemonset-4m26g" [c0e73bf1-5273-4a14-9517-202ce22276b8] Running
	I0923 13:28:02.824719 2383828 system_pods.go:89] "registry-66c9cd494c-2g5d2" [d093e650-6688-49f8-9c46-28a49dd5a974] Running
	I0923 13:28:02.824724 2383828 system_pods.go:89] "registry-proxy-pqtjc" [cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8] Running
	I0923 13:28:02.824744 2383828 system_pods.go:89] "snapshot-controller-56fcc65765-5t68w" [15a9f6f7-dd61-455c-be65-26312ab5fa53] Running
	I0923 13:28:02.824749 2383828 system_pods.go:89] "snapshot-controller-56fcc65765-mjwxw" [8d203518-0a49-462e-b208-58bf3d4f9059] Running
	I0923 13:28:02.824755 2383828 system_pods.go:89] "storage-provisioner" [c54ff386-7dac-4422-9ce3-010b14a0da61] Running
	I0923 13:28:02.824763 2383828 system_pods.go:126] duration metric: took 10.19587ms to wait for k8s-apps to be running ...
	I0923 13:28:02.824776 2383828 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:28:02.824845 2383828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:28:02.836586 2383828 system_svc.go:56] duration metric: took 11.795464ms WaitForService to wait for kubelet
	I0923 13:28:02.836625 2383828 kubeadm.go:582] duration metric: took 2m30.848156578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:28:02.836643 2383828 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:28:02.840270 2383828 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:28:02.840307 2383828 node_conditions.go:123] node cpu capacity is 2
	I0923 13:28:02.840319 2383828 node_conditions.go:105] duration metric: took 3.655882ms to run NodePressure ...
	I0923 13:28:02.840330 2383828 start.go:241] waiting for startup goroutines ...
	I0923 13:28:02.840338 2383828 start.go:246] waiting for cluster config update ...
	I0923 13:28:02.840354 2383828 start.go:255] writing updated cluster config ...
	I0923 13:28:02.840649 2383828 ssh_runner.go:195] Run: rm -f paused
	I0923 13:28:03.209187 2383828 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 13:28:03.213065 2383828 out.go:177] * Done! kubectl is now configured to use "addons-133262" cluster and "default" namespace by default
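
The client log above closes out the start flow: poll the apiserver's /healthz endpoint until it answers 200 "ok", then confirm the control-plane version before waiting on kube-system pods. A minimal Go sketch of that polling pattern (an illustration, not minikube's actual implementation; the URL, interval, timeout, and the skipped TLS verification are all assumptions):

// Sketch only: poll an apiserver /healthz endpoint until it returns
// HTTP 200, the way the log above does for https://192.168.49.2:8443/healthz.
// TLS verification is skipped purely for illustration; the real client
// authenticates with the cluster's certificates.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
		}
		time.Sleep(500 * time.Millisecond) // back off between probes
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}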
	
	
	==> CRI-O <==
	Sep 23 13:40:58 addons-133262 crio[966]: time="2024-09-23 13:40:58.659464979Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=7a2fddf5-2ce4-4df6-8eee-3ac5003dbd8e name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:40:58 addons-133262 crio[966]: time="2024-09-23 13:40:58.660252857Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-zvnjf/hello-world-app" id=7ce2fc00-cc79-461e-857c-93147a251d63 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 13:40:58 addons-133262 crio[966]: time="2024-09-23 13:40:58.660366019Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 23 13:40:58 addons-133262 crio[966]: time="2024-09-23 13:40:58.681726731Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3ff055c452c9484507b9aef09fe22ebcdc082acc07873caafb2a68c410931a79/merged/etc/passwd: no such file or directory"
	Sep 23 13:40:58 addons-133262 crio[966]: time="2024-09-23 13:40:58.681921557Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3ff055c452c9484507b9aef09fe22ebcdc082acc07873caafb2a68c410931a79/merged/etc/group: no such file or directory"
	Sep 23 13:40:58 addons-133262 crio[966]: time="2024-09-23 13:40:58.727244960Z" level=info msg="Created container 4ea829001ea4f4cd0466e78d63df8159aa81f11d81cdcf21daaa8194d81db24e: default/hello-world-app-55bf9c44b4-zvnjf/hello-world-app" id=7ce2fc00-cc79-461e-857c-93147a251d63 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 13:40:58 addons-133262 crio[966]: time="2024-09-23 13:40:58.727853732Z" level=info msg="Starting container: 4ea829001ea4f4cd0466e78d63df8159aa81f11d81cdcf21daaa8194d81db24e" id=6911d63a-87bc-4081-bc67-4047b4a8c69a name=/runtime.v1.RuntimeService/StartContainer
	Sep 23 13:40:58 addons-133262 crio[966]: time="2024-09-23 13:40:58.735971067Z" level=info msg="Started container" PID=9023 containerID=4ea829001ea4f4cd0466e78d63df8159aa81f11d81cdcf21daaa8194d81db24e description=default/hello-world-app-55bf9c44b4-zvnjf/hello-world-app id=6911d63a-87bc-4081-bc67-4047b4a8c69a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c0ffc4aafe47e23d3deee5568edaa87f905c16349a45a2e4c97def502d500e2
	Sep 23 13:40:59 addons-133262 crio[966]: time="2024-09-23 13:40:59.588975752Z" level=info msg="Removing container: 53e63daea4104ad2d4354f727c8d49cad8c4e6d585fd0ce34d3e2fcfc6a907a9" id=7d15c73a-7600-4d49-ab35-f7189a9e2de8 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 13:40:59 addons-133262 crio[966]: time="2024-09-23 13:40:59.606780327Z" level=info msg="Removed container 53e63daea4104ad2d4354f727c8d49cad8c4e6d585fd0ce34d3e2fcfc6a907a9: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=7d15c73a-7600-4d49-ab35-f7189a9e2de8 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 13:41:01 addons-133262 crio[966]: time="2024-09-23 13:41:01.421289500Z" level=info msg="Stopping container: 0f16e342a3584ba42fbce6b80690c5642135db641318653f6435f76fc9f8b428 (timeout: 2s)" id=2061a199-84fe-4fe0-899d-516c28783769 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.427294665Z" level=warning msg="Stopping container 0f16e342a3584ba42fbce6b80690c5642135db641318653f6435f76fc9f8b428 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=2061a199-84fe-4fe0-899d-516c28783769 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 13:41:03 addons-133262 conmon[5112]: conmon 0f16e342a3584ba42fbc <ninfo>: container 5124 exited with status 137
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.565622181Z" level=info msg="Stopped container 0f16e342a3584ba42fbce6b80690c5642135db641318653f6435f76fc9f8b428: ingress-nginx/ingress-nginx-controller-bc57996ff-d2pjp/controller" id=2061a199-84fe-4fe0-899d-516c28783769 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.566144023Z" level=info msg="Stopping pod sandbox: 53cf4c8305e8c90c562a088e2f2a6c041631e0f098f9ced65152c38d638c955a" id=94cddb30-f7fc-4c28-a88f-9e5c57771d26 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.569454939Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-TFFBGBAF2FNBESMB - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-XXN7ZDG5NCGSBBQC - [0:0]\n-X KUBE-HP-XXN7ZDG5NCGSBBQC\n-X KUBE-HP-TFFBGBAF2FNBESMB\nCOMMIT\n"
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.583016323Z" level=info msg="Closing host port tcp:80"
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.583076424Z" level=info msg="Closing host port tcp:443"
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.584806998Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.584837422Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.585013073Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-d2pjp Namespace:ingress-nginx ID:53cf4c8305e8c90c562a088e2f2a6c041631e0f098f9ced65152c38d638c955a UID:7d45db68-99b0-41cf-a495-d22b22b643fb NetNS:/var/run/netns/80ce6146-b9ed-4770-a202-d6e216751216 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.585172289Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-d2pjp from CNI network \"kindnet\" (type=ptp)"
	Sep 23 13:41:03 addons-133262 crio[966]: time="2024-09-23 13:41:03.610546612Z" level=info msg="Stopped pod sandbox: 53cf4c8305e8c90c562a088e2f2a6c041631e0f098f9ced65152c38d638c955a" id=94cddb30-f7fc-4c28-a88f-9e5c57771d26 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:41:04 addons-133262 crio[966]: time="2024-09-23 13:41:04.605563368Z" level=info msg="Removing container: 0f16e342a3584ba42fbce6b80690c5642135db641318653f6435f76fc9f8b428" id=bb8b9bc3-cfc4-4d6d-a143-add0ecbe1732 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 13:41:04 addons-133262 crio[966]: time="2024-09-23 13:41:04.621116382Z" level=info msg="Removed container 0f16e342a3584ba42fbce6b80690c5642135db641318653f6435f76fc9f8b428: ingress-nginx/ingress-nginx-controller-bc57996ff-d2pjp/controller" id=bb8b9bc3-cfc4-4d6d-a143-add0ecbe1732 name=/runtime.v1.RuntimeService/RemoveContainer
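
A note on the conmon line above: exit status 137 is the usual 128+signal encoding for a process killed by signal 9 (SIGKILL). The ingress-nginx controller sat through its 2-second stop grace period without exiting, so the runtime escalated from the stop signal to SIGKILL. A trivial decoding of that status (assuming the shell-style wait-status convention):

// Sketch: decode "exited with status 137" from the log above.
// Statuses above 128 conventionally mean "killed by signal (status-128)",
// so 137 = 128 + 9 (SIGKILL).
package main

import "fmt"

func main() {
	const exitStatus = 137
	if exitStatus > 128 {
		fmt.Printf("terminated by signal %d\n", exitStatus-128) // 9 = SIGKILL
	}
}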
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4ea829001ea4f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        10 seconds ago      Running             hello-world-app           0                   7c0ffc4aafe47       hello-world-app-55bf9c44b4-zvnjf
	1ce6fef620d09       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                              2 minutes ago       Running             nginx                     0                   e467bc93e7432       nginx
	334680bd78e33       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 13 minutes ago      Running             gcp-auth                  0                   2c1d4aa6e8775       gcp-auth-89d5ffd79-sn4tn
	1cc785331b728       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             13 minutes ago      Exited              patch                     2                   f15774b38ba8c       ingress-nginx-admission-patch-lzqrp
	96b039535fe06       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   14 minutes ago      Exited              create                    0                   13108374025b0       ingress-nginx-admission-create-rxj9z
	b0b2fe538d362       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        14 minutes ago      Running             metrics-server            0                   dbcdb7b69735c       metrics-server-84c5f94fbc-dqnhw
	846b4d1bcfbe3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             14 minutes ago      Running             storage-provisioner       0                   a4e85889dbd73       storage-provisioner
	62d73ade94f57       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             14 minutes ago      Running             coredns                   0                   ccac108e74df4       coredns-7c65d6cfc9-r5mdg
	6e1da3a73993a       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             15 minutes ago      Running             kube-proxy                0                   3929648a8d7f9       kube-proxy-qsbr8
	de10c80270b5c       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             15 minutes ago      Running             kindnet-cni               0                   107beb5e7b8ce       kindnet-j682f
	1ef3f97eb6473       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             15 minutes ago      Running             kube-scheduler            0                   9b8411a580ef2       kube-scheduler-addons-133262
	3cf91c4e890ab       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             15 minutes ago      Running             kube-controller-manager   0                   ed11482c3169e       kube-controller-manager-addons-133262
	9a2762b26053f       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             15 minutes ago      Running             kube-apiserver            0                   02dbc597f6b2f       kube-apiserver-addons-133262
	227c9772e72a3       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             15 minutes ago      Running             etcd                      0                   7c44e58ec4ddc       etcd-addons-133262
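
The CONTAINER column above shows truncated 13-character IDs; each is a prefix of the full 64-character ID that the crictl commands earlier in this log operate on, and crictl resolves either form. A one-line check using IDs taken from this report:

// Sketch: the short ID in the table is a prefix of the full ID used by
// the "crictl logs" invocations above (both copied from this report).
package main

import (
	"fmt"
	"strings"
)

func main() {
	full := "227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7" // etcd
	short := "227c9772e72a3"                                                   // CONTAINER column
	fmt.Println(strings.HasPrefix(full, short))                                // true
}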
	
	
	==> coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] <==
	[INFO] 10.244.0.15:58839 - 14543 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098385s
	[INFO] 10.244.0.15:53549 - 55590 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002790769s
	[INFO] 10.244.0.15:53549 - 53051 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003201599s
	[INFO] 10.244.0.15:57616 - 17518 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0004867s
	[INFO] 10.244.0.15:57616 - 5395 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000831857s
	[INFO] 10.244.0.15:45938 - 8747 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161046s
	[INFO] 10.244.0.15:45938 - 40758 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200742s
	[INFO] 10.244.0.15:35197 - 55448 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055129s
	[INFO] 10.244.0.15:35197 - 11418 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055506s
	[INFO] 10.244.0.15:55894 - 47736 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100871s
	[INFO] 10.244.0.15:55894 - 56694 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011304s
	[INFO] 10.244.0.15:44812 - 41796 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001561136s
	[INFO] 10.244.0.15:44812 - 9538 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00191687s
	[INFO] 10.244.0.15:49269 - 61781 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081385s
	[INFO] 10.244.0.15:49269 - 20566 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043453s
	[INFO] 10.244.0.20:57660 - 31419 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212123s
	[INFO] 10.244.0.20:32983 - 51792 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108314s
	[INFO] 10.244.0.20:49419 - 11345 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013397s
	[INFO] 10.244.0.20:59959 - 61304 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001039721s
	[INFO] 10.244.0.20:40904 - 968 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127275s
	[INFO] 10.244.0.20:60236 - 53744 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132911s
	[INFO] 10.244.0.20:44058 - 55419 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002363448s
	[INFO] 10.244.0.20:45850 - 62938 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002112385s
	[INFO] 10.244.0.20:37367 - 36922 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001718064s
	[INFO] 10.244.0.20:53861 - 52609 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002198873s
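
The NXDOMAIN/NOERROR pairs above are normal resolver behavior, not lookup failures: with Kubernetes' default "ndots:5" in the pod's resolv.conf, a name with fewer than five dots is first tried with each search suffix appended, and only the final absolute query returns NOERROR. A hedged Go sketch of that expansion (the search list is inferred from the suffixed queries in this log; us-east-2.compute.internal comes from the underlying EC2 host):

// Sketch of resolv.conf search-path expansion: a name with fewer than
// `ndots` dots is tried with each search suffix appended before being
// queried as-is, matching the NXDOMAIN-then-NOERROR sequence above.
package main

import (
	"fmt"
	"strings"
)

func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, suffix := range search {
			out = append(out, name+"."+suffix) // suffixed queries go first
		}
	}
	return append(out, name) // the absolute name is tried last
}

func main() {
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q)
	}
}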
	
	
	==> describe nodes <==
	Name:               addons-133262
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-133262
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-133262
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_25_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-133262
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:25:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-133262
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:41:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:39:03 +0000   Mon, 23 Sep 2024 13:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:39:03 +0000   Mon, 23 Sep 2024 13:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:39:03 +0000   Mon, 23 Sep 2024 13:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:39:03 +0000   Mon, 23 Sep 2024 13:26:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-133262
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 956a9a3790d546e98f478aa431b93546
	  System UUID:                87adfa53-2e43-424b-9596-ae2d9c13f82d
	  Boot ID:                    97839423-83c8-4f76-b1f5-7b978ef1271e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-zvnjf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-89d5ffd79-sn4tn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-r5mdg                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-addons-133262                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-j682f                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-addons-133262             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-133262    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-qsbr8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-133262             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-dqnhw          100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 15m   kube-proxy       
	  Normal   Starting                 15m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m   kubelet          Node addons-133262 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m   kubelet          Node addons-133262 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m   kubelet          Node addons-133262 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m   node-controller  Node addons-133262 event: Registered Node addons-133262 in Controller
	  Normal   NodeReady                14m   kubelet          Node addons-133262 status is now: NodeReady
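
The percentages in the Allocated resources table above are summed pod requests (or limits) divided by node allocatable, truncated to whole percent: 950m CPU of the node's 2000m is 47%, and 420Mi of 8022300Ki memory is about 5%. A quick check of that arithmetic:

// Sketch: reproduce the Allocated resources percentages from the node
// description above (integer division truncates, as kubectl displays).
package main

import "fmt"

func main() {
	fmt.Printf("cpu: %d%%\n", 950*100/2000)            // 950m requests of 2000m -> 47
	fmt.Printf("memory: %d%%\n", 420*1024*100/8022300) // 420Mi of 8022300Ki -> 5
}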
	
	
	==> dmesg <==
	[Sep23 13:41] hrtimer: interrupt took 2926293 ns
	
	
	==> etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] <==
	{"level":"info","ts":"2024-09-23T13:25:21.710353Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:25:21.710990Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:25:21.711900Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T13:25:21.739279Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-09-23T13:25:34.842632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.135341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-23T13:25:34.842748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.290389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:25:34.842768Z","caller":"traceutil/trace.go:171","msg":"trace[2058929202] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:340; }","duration":"101.314314ms","start":"2024-09-23T13:25:34.741449Z","end":"2024-09-23T13:25:34.842763Z","steps":["trace[2058929202] 'range keys from in-memory index tree'  (duration: 100.503503ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:34.842723Z","caller":"traceutil/trace.go:171","msg":"trace[446664898] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:340; }","duration":"101.242874ms","start":"2024-09-23T13:25:34.741467Z","end":"2024-09-23T13:25:34.842710Z","steps":["trace[446664898] 'range keys from in-memory index tree'  (duration: 100.54671ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:35.278741Z","caller":"traceutil/trace.go:171","msg":"trace[254033020] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"109.264528ms","start":"2024-09-23T13:25:35.169436Z","end":"2024-09-23T13:25:35.278701Z","steps":["trace[254033020] 'process raft request'  (duration: 25.693032ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:35.543061Z","caller":"traceutil/trace.go:171","msg":"trace[1953560179] linearizableReadLoop","detail":"{readStateIndex:358; appliedIndex:358; }","duration":"246.516564ms","start":"2024-09-23T13:25:35.296531Z","end":"2024-09-23T13:25:35.543048Z","steps":["trace[1953560179] 'read index received'  (duration: 246.511993ms)","trace[1953560179] 'applied index is now lower than readState.Index'  (duration: 3.75µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:25:35.546513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.962136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:25:35.549918Z","caller":"traceutil/trace.go:171","msg":"trace[1294112056] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:346; }","duration":"253.35678ms","start":"2024-09-23T13:25:35.296527Z","end":"2024-09-23T13:25:35.549883Z","steps":["trace[1294112056] 'agreement among raft nodes before linearized reading'  (duration: 249.932811ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:25:35.585444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.084397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-23T13:25:35.594064Z","caller":"traceutil/trace.go:171","msg":"trace[876578767] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:347; }","duration":"297.060415ms","start":"2024-09-23T13:25:35.296987Z","end":"2024-09-23T13:25:35.594048Z","steps":["trace[876578767] 'agreement among raft nodes before linearized reading'  (duration: 288.392254ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:38.966922Z","caller":"traceutil/trace.go:171","msg":"trace[1773688844] transaction","detail":"{read_only:false; response_revision:683; number_of_response:1; }","duration":"178.146917ms","start":"2024-09-23T13:25:38.788751Z","end":"2024-09-23T13:25:38.966898Z","steps":["trace[1773688844] 'process raft request'  (duration: 178.069176ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:38.969539Z","caller":"traceutil/trace.go:171","msg":"trace[208207188] linearizableReadLoop","detail":"{readStateIndex:708; appliedIndex:708; }","duration":"179.399909ms","start":"2024-09-23T13:25:38.790120Z","end":"2024-09-23T13:25:38.969520Z","steps":["trace[208207188] 'read index received'  (duration: 179.395093ms)","trace[208207188] 'applied index is now lower than readState.Index'  (duration: 3.47µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:25:38.995483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.007661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-133262\" ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2024-09-23T13:25:38.995537Z","caller":"traceutil/trace.go:171","msg":"trace[562549751] range","detail":"{range_begin:/registry/minions/addons-133262; range_end:; response_count:1; response_revision:683; }","duration":"207.069297ms","start":"2024-09-23T13:25:38.788455Z","end":"2024-09-23T13:25:38.995524Z","steps":["trace[562549751] 'agreement among raft nodes before linearized reading'  (duration: 181.122667ms)","trace[562549751] 'range keys from in-memory index tree'  (duration: 25.81627ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T13:25:38.995831Z","caller":"traceutil/trace.go:171","msg":"trace[317156449] transaction","detail":"{read_only:false; response_revision:684; number_of_response:1; }","duration":"198.798657ms","start":"2024-09-23T13:25:38.797023Z","end":"2024-09-23T13:25:38.995821Z","steps":["trace[317156449] 'process raft request'  (duration: 172.804094ms)","trace[317156449] 'compare'  (duration: 25.418387ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T13:35:21.883294Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1525}
	{"level":"info","ts":"2024-09-23T13:35:21.915067Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1525,"took":"31.250229ms","hash":2887434094,"current-db-size-bytes":6610944,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3317760,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-23T13:35:21.915114Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2887434094,"revision":1525,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T13:40:21.889636Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1941}
	{"level":"info","ts":"2024-09-23T13:40:21.907973Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1941,"took":"17.79001ms","hash":785404094,"current-db-size-bytes":6610944,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":4599808,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-23T13:40:21.908026Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":785404094,"revision":1941,"compact-revision":1525}
	
	
	==> gcp-auth [334680bd78e33f77a791df37c38d964e3d859e5ec3bc4717d639109d0e519646] <==
	2024/09/23 13:28:03 Ready to marshal response ...
	2024/09/23 13:28:03 Ready to write response ...
	2024/09/23 13:36:17 Ready to marshal response ...
	2024/09/23 13:36:17 Ready to write response ...
	2024/09/23 13:36:25 Ready to marshal response ...
	2024/09/23 13:36:25 Ready to write response ...
	2024/09/23 13:36:25 Ready to marshal response ...
	2024/09/23 13:36:25 Ready to write response ...
	2024/09/23 13:36:35 Ready to marshal response ...
	2024/09/23 13:36:35 Ready to write response ...
	2024/09/23 13:37:21 Ready to marshal response ...
	2024/09/23 13:37:21 Ready to write response ...
	2024/09/23 13:37:21 Ready to marshal response ...
	2024/09/23 13:37:21 Ready to write response ...
	2024/09/23 13:37:21 http: TLS handshake error from 10.244.0.1:4500: EOF
	2024/09/23 13:37:21 Ready to marshal response ...
	2024/09/23 13:37:21 Ready to write response ...
	2024/09/23 13:37:44 Ready to marshal response ...
	2024/09/23 13:37:44 Ready to write response ...
	2024/09/23 13:38:07 Ready to marshal response ...
	2024/09/23 13:38:07 Ready to write response ...
	2024/09/23 13:38:35 Ready to marshal response ...
	2024/09/23 13:38:35 Ready to write response ...
	2024/09/23 13:40:57 Ready to marshal response ...
	2024/09/23 13:40:57 Ready to write response ...
	
	
	==> kernel <==
	 13:41:09 up 15:23,  0 users,  load average: 0.10, 0.50, 1.27
	Linux addons-133262 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] <==
	I0923 13:39:06.083739       1 main.go:299] handling current node
	I0923 13:39:16.082934       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:39:16.082990       1 main.go:299] handling current node
	I0923 13:39:26.083141       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:39:26.083180       1 main.go:299] handling current node
	I0923 13:39:36.083582       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:39:36.083703       1 main.go:299] handling current node
	I0923 13:39:46.083660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:39:46.083707       1 main.go:299] handling current node
	I0923 13:39:56.083332       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:39:56.083366       1 main.go:299] handling current node
	I0923 13:40:06.083251       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:40:06.083308       1 main.go:299] handling current node
	I0923 13:40:16.083372       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:40:16.083409       1 main.go:299] handling current node
	I0923 13:40:26.082869       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:40:26.082905       1 main.go:299] handling current node
	I0923 13:40:36.082971       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:40:36.083004       1 main.go:299] handling current node
	I0923 13:40:46.082705       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:40:46.082757       1 main.go:299] handling current node
	I0923 13:40:56.082919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:40:56.082954       1 main.go:299] handling current node
	I0923 13:41:06.083615       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:41:06.083650       1 main.go:299] handling current node
	
	
	==> kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] <==
	I0923 13:27:41.883186       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0923 13:36:36.296973       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 13:36:36.308652       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 13:36:36.324442       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 13:36:51.321281       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 13:37:21.628239       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.15.199"}
	I0923 13:37:57.080666       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 13:38:22.800811       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 13:38:22.800859       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 13:38:22.852138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 13:38:22.852382       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 13:38:22.905433       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 13:38:22.905588       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 13:38:22.951836       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 13:38:22.951889       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 13:38:23.063669       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 13:38:23.063803       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 13:38:23.952514       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 13:38:24.064411       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0923 13:38:24.075015       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0923 13:38:29.808252       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 13:38:30.849285       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 13:38:35.396198       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 13:38:35.713316       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.7.99"}
	I0923 13:40:57.534167       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.25.111"}
	
	
	==> kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] <==
	W0923 13:40:11.312710       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:40:11.312753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:40:25.390740       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:40:25.390783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:40:27.672179       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:40:27.672223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:40:29.914645       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:40:29.914788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 13:40:57.295067       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="58.093544ms"
	I0923 13:40:57.305236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.024438ms"
	I0923 13:40:57.317504       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.217335ms"
	I0923 13:40:57.317696       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="57.328µs"
	I0923 13:40:59.655423       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="25.403016ms"
	I0923 13:40:59.656118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="47.867µs"
	I0923 13:41:00.383039       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0923 13:41:00.390237       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0923 13:41:00.396393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.173µs"
	W0923 13:41:00.767132       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:41:00.767176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:41:01.311583       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:41:01.311633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:41:02.382157       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:41:02.382197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:41:07.857225       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:41:07.857274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] <==
	I0923 13:25:36.937749       1 server_linux.go:66] "Using iptables proxy"
	I0923 13:25:37.338915       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 13:25:37.338986       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:25:37.413835       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 13:25:37.413972       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:25:37.415844       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:25:37.416398       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:25:37.416459       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:25:37.423427       1 config.go:199] "Starting service config controller"
	I0923 13:25:37.423523       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:25:37.423586       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:25:37.423616       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:25:37.424042       1 config.go:328] "Starting node config controller"
	I0923 13:25:37.424095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:25:37.584302       1 shared_informer.go:320] Caches are synced for node config
	I0923 13:25:37.584359       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:25:37.623656       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] <==
	W0923 13:25:24.580187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 13:25:24.582008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:24.580220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:25:24.582107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:24.580318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:25:24.582203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:24.580371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:25:24.582354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.418271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 13:25:25.418450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.587950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 13:25:25.588075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.616405       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:25:25.616450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.642462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 13:25:25.647975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.666673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:25:25.666824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.673612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:25:25.673747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.684524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 13:25:25.684652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.718405       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:25:25.718452       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 13:25:27.559102       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:40:57 addons-133262 kubelet[1502]: E0923 13:40:57.289974    1502 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d19e7839-1016-4e61-ba5e-28b2f0a6c2eb" containerName="gadget"
	Sep 23 13:40:57 addons-133262 kubelet[1502]: I0923 13:40:57.290014    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19e7839-1016-4e61-ba5e-28b2f0a6c2eb" containerName="gadget"
	Sep 23 13:40:57 addons-133262 kubelet[1502]: I0923 13:40:57.290025    1502 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19e7839-1016-4e61-ba5e-28b2f0a6c2eb" containerName="gadget"
	Sep 23 13:40:57 addons-133262 kubelet[1502]: I0923 13:40:57.405721    1502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt7kd\" (UniqueName: \"kubernetes.io/projected/02c6593f-c4df-41b9-9e1e-87ba5b028a2c-kube-api-access-mt7kd\") pod \"hello-world-app-55bf9c44b4-zvnjf\" (UID: \"02c6593f-c4df-41b9-9e1e-87ba5b028a2c\") " pod="default/hello-world-app-55bf9c44b4-zvnjf"
	Sep 23 13:40:57 addons-133262 kubelet[1502]: I0923 13:40:57.405789    1502 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/02c6593f-c4df-41b9-9e1e-87ba5b028a2c-gcp-creds\") pod \"hello-world-app-55bf9c44b4-zvnjf\" (UID: \"02c6593f-c4df-41b9-9e1e-87ba5b028a2c\") " pod="default/hello-world-app-55bf9c44b4-zvnjf"
	Sep 23 13:40:57 addons-133262 kubelet[1502]: E0923 13:40:57.468482    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098857468104783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563280,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:40:57 addons-133262 kubelet[1502]: E0923 13:40:57.468515    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098857468104783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:563280,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:40:58 addons-133262 kubelet[1502]: I0923 13:40:58.715716    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4nxt\" (UniqueName: \"kubernetes.io/projected/f3f96ece-39b2-4aef-afc3-deeac0208c34-kube-api-access-m4nxt\") pod \"f3f96ece-39b2-4aef-afc3-deeac0208c34\" (UID: \"f3f96ece-39b2-4aef-afc3-deeac0208c34\") "
	Sep 23 13:40:58 addons-133262 kubelet[1502]: I0923 13:40:58.721960    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f96ece-39b2-4aef-afc3-deeac0208c34-kube-api-access-m4nxt" (OuterVolumeSpecName: "kube-api-access-m4nxt") pod "f3f96ece-39b2-4aef-afc3-deeac0208c34" (UID: "f3f96ece-39b2-4aef-afc3-deeac0208c34"). InnerVolumeSpecName "kube-api-access-m4nxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 13:40:58 addons-133262 kubelet[1502]: I0923 13:40:58.816983    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-m4nxt\" (UniqueName: \"kubernetes.io/projected/f3f96ece-39b2-4aef-afc3-deeac0208c34-kube-api-access-m4nxt\") on node \"addons-133262\" DevicePath \"\""
	Sep 23 13:40:59 addons-133262 kubelet[1502]: I0923 13:40:59.587024    1502 scope.go:117] "RemoveContainer" containerID="53e63daea4104ad2d4354f727c8d49cad8c4e6d585fd0ce34d3e2fcfc6a907a9"
	Sep 23 13:40:59 addons-133262 kubelet[1502]: I0923 13:40:59.628508    1502 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-zvnjf" podStartSLOduration=1.62666093 podStartE2EDuration="2.62848915s" podCreationTimestamp="2024-09-23 13:40:57 +0000 UTC" firstStartedPulling="2024-09-23 13:40:57.656442647 +0000 UTC m=+931.029843612" lastFinishedPulling="2024-09-23 13:40:58.658270876 +0000 UTC m=+932.031671832" observedRunningTime="2024-09-23 13:40:59.62831482 +0000 UTC m=+933.001715793" watchObservedRunningTime="2024-09-23 13:40:59.62848915 +0000 UTC m=+933.001890107"
	Sep 23 13:41:00 addons-133262 kubelet[1502]: I0923 13:41:00.761020    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d55b35-a224-4baf-9dd0-123fef61246c" path="/var/lib/kubelet/pods/89d55b35-a224-4baf-9dd0-123fef61246c/volumes"
	Sep 23 13:41:00 addons-133262 kubelet[1502]: I0923 13:41:00.761456    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa95d01-0938-405a-a34d-6f36d8ab05bd" path="/var/lib/kubelet/pods/aaa95d01-0938-405a-a34d-6f36d8ab05bd/volumes"
	Sep 23 13:41:00 addons-133262 kubelet[1502]: I0923 13:41:00.761787    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3f96ece-39b2-4aef-afc3-deeac0208c34" path="/var/lib/kubelet/pods/f3f96ece-39b2-4aef-afc3-deeac0208c34/volumes"
	Sep 23 13:41:03 addons-133262 kubelet[1502]: I0923 13:41:03.648595    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d45db68-99b0-41cf-a495-d22b22b643fb-webhook-cert\") pod \"7d45db68-99b0-41cf-a495-d22b22b643fb\" (UID: \"7d45db68-99b0-41cf-a495-d22b22b643fb\") "
	Sep 23 13:41:03 addons-133262 kubelet[1502]: I0923 13:41:03.648658    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8lnp\" (UniqueName: \"kubernetes.io/projected/7d45db68-99b0-41cf-a495-d22b22b643fb-kube-api-access-g8lnp\") pod \"7d45db68-99b0-41cf-a495-d22b22b643fb\" (UID: \"7d45db68-99b0-41cf-a495-d22b22b643fb\") "
	Sep 23 13:41:03 addons-133262 kubelet[1502]: I0923 13:41:03.650869    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d45db68-99b0-41cf-a495-d22b22b643fb-kube-api-access-g8lnp" (OuterVolumeSpecName: "kube-api-access-g8lnp") pod "7d45db68-99b0-41cf-a495-d22b22b643fb" (UID: "7d45db68-99b0-41cf-a495-d22b22b643fb"). InnerVolumeSpecName "kube-api-access-g8lnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 13:41:03 addons-133262 kubelet[1502]: I0923 13:41:03.651573    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d45db68-99b0-41cf-a495-d22b22b643fb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7d45db68-99b0-41cf-a495-d22b22b643fb" (UID: "7d45db68-99b0-41cf-a495-d22b22b643fb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 23 13:41:03 addons-133262 kubelet[1502]: I0923 13:41:03.749069    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g8lnp\" (UniqueName: \"kubernetes.io/projected/7d45db68-99b0-41cf-a495-d22b22b643fb-kube-api-access-g8lnp\") on node \"addons-133262\" DevicePath \"\""
	Sep 23 13:41:03 addons-133262 kubelet[1502]: I0923 13:41:03.749108    1502 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7d45db68-99b0-41cf-a495-d22b22b643fb-webhook-cert\") on node \"addons-133262\" DevicePath \"\""
	Sep 23 13:41:04 addons-133262 kubelet[1502]: I0923 13:41:04.603925    1502 scope.go:117] "RemoveContainer" containerID="0f16e342a3584ba42fbce6b80690c5642135db641318653f6435f76fc9f8b428"
	Sep 23 13:41:04 addons-133262 kubelet[1502]: I0923 13:41:04.763159    1502 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d45db68-99b0-41cf-a495-d22b22b643fb" path="/var/lib/kubelet/pods/7d45db68-99b0-41cf-a495-d22b22b643fb/volumes"
	Sep 23 13:41:07 addons-133262 kubelet[1502]: E0923 13:41:07.470952    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098867470675057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:41:07 addons-133262 kubelet[1502]: E0923 13:41:07.470988    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098867470675057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [846b4d1bcfbe362e097d8174a0b2808c301ad53a9959a5c8577ae8669f7374d8] <==
	I0923 13:26:17.410205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 13:26:17.427878       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 13:26:17.428011       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 13:26:17.451710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 13:26:17.452694       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-133262_255eadcb-81cb-4ff4-8832-d04e319c6694!
	I0923 13:26:17.453702       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f2b24c6-4123-42bd-a56d-cf65e312df77", APIVersion:"v1", ResourceVersion:"901", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-133262_255eadcb-81cb-4ff4-8832-d04e319c6694 became leader
	I0923 13:26:17.552987       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-133262_255eadcb-81cb-4ff4-8832-d04e319c6694!

-- /stdout --
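The storage-provisioner section at the end of the log dump shows the standard client-go leader-election handshake: the pod races for the kube-system/k8s.io-minikube-hostpath lease and only starts its controller once it becomes leader. The logged provisioner uses the older Endpoints-based lock; a minimal sketch of the same pattern with a Lease lock (lease name and namespace copied from the log, everything else assumed):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lease name and namespace taken from the log above; the logged
	// provisioner actually records an Endpoints-based lock event.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("lease acquired; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lease lost; stopping")
			},
		},
	})
}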
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-133262 -n addons-133262
helpers_test.go:261: (dbg) Run:  kubectl --context addons-133262 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-133262 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-133262 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-133262/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 13:28:03 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2xb2r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2xb2r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  13m                default-scheduler  Successfully assigned default/busybox to addons-133262
	  Normal   Pulling    11m (x4 over 13m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     11m (x4 over 13m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     11m (x4 over 13m)  kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.03s)
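The describe output above pins this failure on an unrelated pre-existing problem: busybox has sat in ImagePullBackOff for 13 minutes because the kubelet cannot authenticate to gcr.io ("unable to retrieve auth token: invalid username/password"). A minimal client-go sketch of how a post-mortem could read that waiting reason directly (pod name and namespace from the output above; the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig (~/.kube/config) points at the
	// addons-133262 cluster, as kubectl does in the commands above.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod, err := cs.CoreV1().Pods("default").Get(context.Background(), "busybox", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if w := st.State.Waiting; w != nil {
			// In this run the reason is ImagePullBackOff and the last
			// recorded pull error was an auth failure against gcr.io.
			fmt.Printf("%s: %s %s\n", st.Name, w.Reason, w.Message)
		}
	}
}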

                                                
                                    
TestAddons/parallel/MetricsServer (329.97s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 6.310498ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-dqnhw" [6d7335f6-5dfb-4227-9606-8d8b1b126d40] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003659997s
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (95.44362ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 12m11.625181846s

** /stderr **
I0923 13:37:43.628669 2383070 retry.go:31] will retry after 3.009474013s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (165.124776ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 12m14.801090475s

** /stderr **
I0923 13:37:46.804219 2383070 retry.go:31] will retry after 2.537863415s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (138.755195ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 12m17.478208069s

** /stderr **
I0923 13:37:49.481702 2383070 retry.go:31] will retry after 4.169419805s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (93.00015ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 12m21.741705678s

** /stderr **
I0923 13:37:53.744996 2383070 retry.go:31] will retry after 14.92861463s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (103.231589ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 12m36.774426466s

** /stderr **
I0923 13:38:08.777905 2383070 retry.go:31] will retry after 19.012351037s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (85.023524ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 12m55.872312618s

** /stderr **
I0923 13:38:27.875617 2383070 retry.go:31] will retry after 18.21988575s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (97.014177ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 13m14.187060738s

** /stderr **
I0923 13:38:46.193361 2383070 retry.go:31] will retry after 18.25707024s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (89.305557ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 13m32.536908346s

** /stderr **
I0923 13:39:04.540099 2383070 retry.go:31] will retry after 42.956198176s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (85.944349ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 14m15.581114203s

** /stderr **
I0923 13:39:47.584251 2383070 retry.go:31] will retry after 1m12.638584807s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (157.773256ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 15m28.37781886s

** /stderr **
I0923 13:41:00.380937 2383070 retry.go:31] will retry after 30.135372045s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (84.600999ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 15m58.602166446s

** /stderr **
I0923 13:41:30.605320 2383070 retry.go:31] will retry after 1m1.859635514s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (88.011281ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 17m0.555700144s

** /stderr **
I0923 13:42:32.559279 2383070 retry.go:31] will retry after 32.565456225s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-133262 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-133262 top pods -n kube-system: exit status 1 (90.751166ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-r5mdg, age: 17m33.212441584s

** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
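Every failed `kubectl top pods` above is followed by a `retry.go:31] will retry after …` line: the harness polls with a growing, irregular delay until roughly 13:42 and then gives up. A minimal sketch of that style of retry loop (the helper name and backoff schedule here are assumptions, not minikube's actual retry.go):

package main

import (
	"errors"
	"log"
	"math/rand"
	"time"
)

// retryUntil re-runs fn until it succeeds or the deadline passes,
// sleeping a randomized, growing interval between attempts -- the
// same shape as the "will retry after" lines in the log above.
func retryUntil(deadline time.Time, fn func() error) error {
	wait := 3 * time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(wait).After(deadline) {
			return err
		}
		log.Printf("will retry after %v: %v", wait, err)
		time.Sleep(wait)
		// Grow the interval with jitter so retries spread out.
		wait += time.Duration(rand.Int63n(int64(wait)))
	}
}

func main() {
	err := retryUntil(time.Now().Add(30*time.Second), func() error {
		return errors.New("Metrics not available") // stand-in for `kubectl top pods`
	})
	log.Println(err)
}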
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-133262
helpers_test.go:235: (dbg) docker inspect addons-133262:

-- stdout --
	[
	    {
	        "Id": "5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95",
	        "Created": "2024-09-23T13:25:04.273986374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2384322,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T13:25:04.39615577Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/hostname",
	        "HostsPath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/hosts",
	        "LogPath": "/var/lib/docker/containers/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95/5025a3e562405ccbb5e57022efe59b9bbe643e70c019e4c06b37590b7afd6f95-json.log",
	        "Name": "/addons-133262",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-133262:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-133262",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc-init/diff:/var/lib/docker/overlay2/cb21b5e82393f0d5264c7db3ef721bc402a1fb078a3835cf5b3c87b0c534f7c3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a338015ce0d4f39570960bfbc498e21bd3d77cc2352e2ecf45c7a1e6bf2501fc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-133262",
	                "Source": "/var/lib/docker/volumes/addons-133262/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-133262",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-133262",
	                "name.minikube.sigs.k8s.io": "addons-133262",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1741029badc86a71140569cf0476e607610316c0823ed37e11befd21a27df5ad",
	            "SandboxKey": "/var/run/docker/netns/1741029badc8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35734"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35735"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35738"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35736"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35737"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-133262": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "32e42fc489c18023f59643e3f9c8a5aaca44c70cab10ea22839173b8efe7a5b0",
	                    "EndpointID": "f553c0425f96879275a6868c4915333e0a9bf18829e579f5bd5a87a9769b40ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-133262",
	                        "5025a3e56240"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
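The post-mortem gets this data by shelling out to `docker inspect addons-133262`; in the JSON above, the node IP sits under NetworkSettings.Networks. The same lookup through the Docker Go SDK might look like this (a sketch; the client options are the SDK defaults):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The same call the post-mortem makes via the docker CLI.
	info, err := cli.ContainerInspect(context.Background(), "addons-133262")
	if err != nil {
		log.Fatal(err)
	}

	// The cluster network is named after the profile; its endpoint
	// carries the node IP (192.168.49.2 in the dump above).
	if ep, ok := info.NetworkSettings.Networks["addons-133262"]; ok {
		fmt.Println(ep.IPAddress)
	}
}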
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-133262 -n addons-133262
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-133262 logs -n 25: (1.534855118s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-496865                                                                     | download-only-496865   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| start   | --download-only -p                                                                          | download-docker-237977 | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | download-docker-237977                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-237977                                                                   | download-docker-237977 | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-127301   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | binary-mirror-127301                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42465                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-127301                                                                     | binary-mirror-127301   | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| addons  | enable dashboard -p                                                                         | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | addons-133262                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | addons-133262                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-133262 --wait=true                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:36 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:36 UTC |
	|         | -p addons-133262                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-133262 ssh cat                                                                       | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:36 UTC |
	|         | /opt/local-path-provisioner/pvc-ba93c3ca-4ceb-4c2d-8d75-76b896b20b5e_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:36 UTC | 23 Sep 24 13:37 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-133262 ip                                                                            | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	|         | -p addons-133262                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	|         | addons-133262                                                                               |                        |         |         |                     |                     |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:37 UTC | 23 Sep 24 13:37 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-133262 addons                                                                        | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC | 23 Sep 24 13:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-133262 addons                                                                        | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC | 23 Sep 24 13:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC | 23 Sep 24 13:38 UTC |
	|         | addons-133262                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-133262 ssh curl -s                                                                   | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-133262 ip                                                                            | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:40 UTC | 23 Sep 24 13:40 UTC |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:40 UTC | 23 Sep 24 13:40 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-133262 addons disable                                                                | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:40 UTC | 23 Sep 24 13:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-133262 addons                                                                        | addons-133262          | jenkins | v1.34.0 | 23 Sep 24 13:43 UTC | 23 Sep 24 13:43 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:24:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:24:40.364478 2383828 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:24:40.364687 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:24:40.364718 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:24:40.364739 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:24:40.365007 2383828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 13:24:40.365476 2383828 out.go:352] Setting JSON to false
	I0923 13:24:40.366420 2383828 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":54423,"bootTime":1727043457,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0923 13:24:40.366518 2383828 start.go:139] virtualization:  
	I0923 13:24:40.368697 2383828 out.go:177] * [addons-133262] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:24:40.370555 2383828 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:24:40.370655 2383828 notify.go:220] Checking for updates...
	I0923 13:24:40.373762 2383828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:24:40.375645 2383828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:24:40.376840 2383828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	I0923 13:24:40.378275 2383828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:24:40.379541 2383828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:24:40.380976 2383828 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:24:40.425606 2383828 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:24:40.425734 2383828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:24:40.478465 2383828 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:24:40.468583329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:24:40.478577 2383828 docker.go:318] overlay module found
	I0923 13:24:40.480220 2383828 out.go:177] * Using the docker driver based on user configuration
	I0923 13:24:40.481509 2383828 start.go:297] selected driver: docker
	I0923 13:24:40.481524 2383828 start.go:901] validating driver "docker" against <nil>
	I0923 13:24:40.481538 2383828 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:24:40.482184 2383828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:24:40.531533 2383828 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-23 13:24:40.521410022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:24:40.531752 2383828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:24:40.531987 2383828 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:24:40.533357 2383828 out.go:177] * Using Docker driver with root privileges
	I0923 13:24:40.534774 2383828 cni.go:84] Creating CNI manager for ""
	I0923 13:24:40.534836 2383828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:24:40.534848 2383828 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 13:24:40.534944 2383828 start.go:340] cluster config:
	{Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:24:40.536381 2383828 out.go:177] * Starting "addons-133262" primary control-plane node in "addons-133262" cluster
	I0923 13:24:40.537851 2383828 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 13:24:40.539216 2383828 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:24:40.540387 2383828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:24:40.540468 2383828 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0923 13:24:40.540480 2383828 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:24:40.540486 2383828 cache.go:56] Caching tarball of preloaded images
	I0923 13:24:40.540576 2383828 preload.go:172] Found /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0923 13:24:40.540587 2383828 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 13:24:40.540932 2383828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/config.json ...
	I0923 13:24:40.540964 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/config.json: {Name:mk0f11192ff62aa19eaf7345f3142fd23df23f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:24:40.557194 2383828 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:24:40.557302 2383828 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:24:40.557321 2383828 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 13:24:40.557327 2383828 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 13:24:40.557334 2383828 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 13:24:40.557340 2383828 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 13:24:57.517135 2383828 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 13:24:57.517177 2383828 cache.go:194] Successfully downloaded all kic artifacts
	I0923 13:24:57.517208 2383828 start.go:360] acquireMachinesLock for addons-133262: {Name:mkbc92a211fc9b19084838acda6ec6db74ac2de5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:24:57.517340 2383828 start.go:364] duration metric: took 100.034µs to acquireMachinesLock for "addons-133262"
	I0923 13:24:57.517372 2383828 start.go:93] Provisioning new machine with config: &{Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 13:24:57.517487 2383828 start.go:125] createHost starting for "" (driver="docker")
	I0923 13:24:57.519552 2383828 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 13:24:57.519788 2383828 start.go:159] libmachine.API.Create for "addons-133262" (driver="docker")
	I0923 13:24:57.519822 2383828 client.go:168] LocalClient.Create starting
	I0923 13:24:57.519927 2383828 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem
	I0923 13:24:57.928803 2383828 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem
	I0923 13:24:58.062903 2383828 cli_runner.go:164] Run: docker network inspect addons-133262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 13:24:58.077185 2383828 cli_runner.go:211] docker network inspect addons-133262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 13:24:58.077288 2383828 network_create.go:284] running [docker network inspect addons-133262] to gather additional debugging logs...
	I0923 13:24:58.077309 2383828 cli_runner.go:164] Run: docker network inspect addons-133262
	W0923 13:24:58.092464 2383828 cli_runner.go:211] docker network inspect addons-133262 returned with exit code 1
	I0923 13:24:58.092500 2383828 network_create.go:287] error running [docker network inspect addons-133262]: docker network inspect addons-133262: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-133262 not found
	I0923 13:24:58.092521 2383828 network_create.go:289] output of [docker network inspect addons-133262]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-133262 not found
	
	** /stderr **
	I0923 13:24:58.092643 2383828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:24:58.108933 2383828 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001781250}
	I0923 13:24:58.108976 2383828 network_create.go:124] attempt to create docker network addons-133262 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 13:24:58.109032 2383828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-133262 addons-133262
	I0923 13:24:58.181902 2383828 network_create.go:108] docker network addons-133262 192.168.49.0/24 created
	I0923 13:24:58.181937 2383828 kic.go:121] calculated static IP "192.168.49.2" for the "addons-133262" container
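For reference, the bridge network created above can be checked by hand; a minimal sketch using the name and subnet reported in the log (run on the same build host):

	# inspect the subnet and gateway minikube reserved for addons-133262
	docker network inspect addons-133262 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}}'
	# expected output: 192.168.49.0/24 gw=192.168.49.1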
	I0923 13:24:58.182008 2383828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 13:24:58.195905 2383828 cli_runner.go:164] Run: docker volume create addons-133262 --label name.minikube.sigs.k8s.io=addons-133262 --label created_by.minikube.sigs.k8s.io=true
	I0923 13:24:58.210686 2383828 oci.go:103] Successfully created a docker volume addons-133262
	I0923 13:24:58.210778 2383828 cli_runner.go:164] Run: docker run --rm --name addons-133262-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133262 --entrypoint /usr/bin/test -v addons-133262:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 13:25:00.216144 2383828 cli_runner.go:217] Completed: docker run --rm --name addons-133262-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133262 --entrypoint /usr/bin/test -v addons-133262:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.005304311s)
	I0923 13:25:00.216182 2383828 oci.go:107] Successfully prepared a docker volume addons-133262
	I0923 13:25:00.216215 2383828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:25:00.216236 2383828 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 13:25:00.216350 2383828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-133262:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 13:25:04.208435 2383828 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-133262:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.992035598s)
	I0923 13:25:04.208472 2383828 kic.go:203] duration metric: took 3.992232385s to extract preloaded images to volume ...
	W0923 13:25:04.208630 2383828 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 13:25:04.208755 2383828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 13:25:04.259929 2383828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-133262 --name addons-133262 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-133262 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-133262 --network addons-133262 --ip 192.168.49.2 --volume addons-133262:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 13:25:04.567167 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Running}}
	I0923 13:25:04.589203 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:04.612088 2383828 cli_runner.go:164] Run: docker exec addons-133262 stat /var/lib/dpkg/alternatives/iptables
	I0923 13:25:04.695578 2383828 oci.go:144] the created container "addons-133262" has a running status.
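The --publish=127.0.0.1:: flags in the docker run above map container ports (22, 2376, 5000, 8443, 32443) to ephemeral host ports; a minimal sketch to list them, assuming the container is still running:

	# show the ephemeral host-port mappings for the kic container
	docker port addons-133262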
	I0923 13:25:04.695609 2383828 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa...
	I0923 13:25:05.137525 2383828 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 13:25:05.169488 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:05.191833 2383828 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 13:25:05.191853 2383828 kic_runner.go:114] Args: [docker exec --privileged addons-133262 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 13:25:05.256602 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:05.283280 2383828 machine.go:93] provisionDockerMachine start ...
	I0923 13:25:05.283429 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:05.305554 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:05.305832 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:05.305849 2383828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:25:05.485763 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-133262
	
	I0923 13:25:05.485787 2383828 ubuntu.go:169] provisioning hostname "addons-133262"
	I0923 13:25:05.485852 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:05.505809 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:05.506049 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:05.506062 2383828 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-133262 && echo "addons-133262" | sudo tee /etc/hostname
	I0923 13:25:05.661069 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-133262
	
	I0923 13:25:05.661155 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:05.688059 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:05.688338 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:05.688355 2383828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-133262' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-133262/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-133262' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:25:05.822488 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:25:05.822526 2383828 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-2377681/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-2377681/.minikube}
	I0923 13:25:05.822550 2383828 ubuntu.go:177] setting up certificates
	I0923 13:25:05.822561 2383828 provision.go:84] configureAuth start
	I0923 13:25:05.822632 2383828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133262
	I0923 13:25:05.839354 2383828 provision.go:143] copyHostCerts
	I0923 13:25:05.839446 2383828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem (1078 bytes)
	I0923 13:25:05.839573 2383828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem (1123 bytes)
	I0923 13:25:05.839636 2383828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem (1679 bytes)
	I0923 13:25:05.839689 2383828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem org=jenkins.addons-133262 san=[127.0.0.1 192.168.49.2 addons-133262 localhost minikube]
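The SAN list logged above can be verified on the generated server certificate; a minimal sketch with openssl (path taken from the log, exact output format varies by openssl version):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# should list 127.0.0.1, 192.168.49.2, addons-133262, localhost, minikube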
	I0923 13:25:06.495243 2383828 provision.go:177] copyRemoteCerts
	I0923 13:25:06.495317 2383828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:25:06.495387 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:06.514794 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:06.612607 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 13:25:06.638504 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 13:25:06.663621 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:25:06.689379 2383828 provision.go:87] duration metric: took 866.80454ms to configureAuth
	I0923 13:25:06.689451 2383828 ubuntu.go:193] setting minikube options for container-runtime
	I0923 13:25:06.689667 2383828 config.go:182] Loaded profile config "addons-133262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:25:06.689785 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:06.707118 2383828 main.go:141] libmachine: Using SSH client type: native
	I0923 13:25:06.707369 2383828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35734 <nil> <nil>}
	I0923 13:25:06.707392 2383828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:25:06.938544 2383828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:25:06.938576 2383828 machine.go:96] duration metric: took 1.655268945s to provisionDockerMachine
	I0923 13:25:06.938587 2383828 client.go:171] duration metric: took 9.418759041s to LocalClient.Create
	I0923 13:25:06.938600 2383828 start.go:167] duration metric: took 9.418812767s to libmachine.API.Create "addons-133262"
	I0923 13:25:06.938608 2383828 start.go:293] postStartSetup for "addons-133262" (driver="docker")
	I0923 13:25:06.938620 2383828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:25:06.938686 2383828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:25:06.938731 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:06.956302 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.055692 2383828 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:25:07.058884 2383828 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 13:25:07.058918 2383828 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 13:25:07.058931 2383828 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 13:25:07.058938 2383828 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 13:25:07.058953 2383828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/addons for local assets ...
	I0923 13:25:07.059040 2383828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/files for local assets ...
	I0923 13:25:07.059075 2383828 start.go:296] duration metric: took 120.460907ms for postStartSetup
	I0923 13:25:07.059396 2383828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133262
	I0923 13:25:07.076417 2383828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/config.json ...
	I0923 13:25:07.076731 2383828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:25:07.076792 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:07.093453 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.183072 2383828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 13:25:07.187501 2383828 start.go:128] duration metric: took 9.669998429s to createHost
	I0923 13:25:07.187526 2383828 start.go:83] releasing machines lock for "addons-133262", held for 9.670170929s
	I0923 13:25:07.187597 2383828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-133262
	I0923 13:25:07.203630 2383828 ssh_runner.go:195] Run: cat /version.json
	I0923 13:25:07.203673 2383828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:25:07.203683 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:07.203744 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:07.223131 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.234414 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:07.436803 2383828 ssh_runner.go:195] Run: systemctl --version
	I0923 13:25:07.441288 2383828 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:25:07.583937 2383828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:25:07.588356 2383828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:25:07.611186 2383828 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0923 13:25:07.611279 2383828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:25:07.642594 2383828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0923 13:25:07.642666 2383828 start.go:495] detecting cgroup driver to use...
	I0923 13:25:07.642718 2383828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 13:25:07.642799 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:25:07.659158 2383828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:25:07.670791 2383828 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:25:07.670915 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:25:07.685963 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:25:07.700410 2383828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:25:07.793728 2383828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:25:07.888156 2383828 docker.go:233] disabling docker service ...
	I0923 13:25:07.888238 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:25:07.908488 2383828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:25:07.920988 2383828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:25:08.011802 2383828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:25:08.116061 2383828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 13:25:08.127456 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:25:08.144788 2383828 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 13:25:08.144859 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.155741 2383828 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:25:08.155815 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.166342 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.176318 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.186297 2383828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:25:08.195794 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.205821 2383828 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.222517 2383828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:25:08.232461 2383828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:25:08.241712 2383828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:25:08.250384 2383828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:25:08.337916 2383828 ssh_runner.go:195] Run: sudo systemctl restart crio
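The sed edits above set the pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch to confirm the merged result after the restart, using crio config (the same subcommand this log invokes later) to print the effective configuration:

	# check that the edits landed in the effective CRI-O configuration
	sudo crio config 2>/dev/null \
	  | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'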
	I0923 13:25:08.443675 2383828 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:25:08.443763 2383828 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:25:08.447871 2383828 start.go:563] Will wait 60s for crictl version
	I0923 13:25:08.447976 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:25:08.451632 2383828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:25:08.495719 2383828 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0923 13:25:08.495829 2383828 ssh_runner.go:195] Run: crio --version
	I0923 13:25:08.534184 2383828 ssh_runner.go:195] Run: crio --version
	I0923 13:25:08.574119 2383828 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0923 13:25:08.575986 2383828 cli_runner.go:164] Run: docker network inspect addons-133262 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:25:08.591880 2383828 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 13:25:08.595405 2383828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:25:08.606218 2383828 kubeadm.go:883] updating cluster {Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:25:08.606418 2383828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:25:08.606486 2383828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:25:08.683043 2383828 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:25:08.683069 2383828 crio.go:433] Images already preloaded, skipping extraction
	I0923 13:25:08.683126 2383828 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:25:08.718285 2383828 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:25:08.718324 2383828 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:25:08.718333 2383828 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0923 13:25:08.718438 2383828 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-133262 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:25:08.718527 2383828 ssh_runner.go:195] Run: crio config
	I0923 13:25:08.764315 2383828 cni.go:84] Creating CNI manager for ""
	I0923 13:25:08.764337 2383828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:25:08.764348 2383828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:25:08.764370 2383828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-133262 NodeName:addons-133262 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:25:08.764526 2383828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-133262"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
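	The rendered kubeadm config above targets the deprecated kubeadm.k8s.io/v1beta3 API, which kubeadm itself flags during init (see the two deprecation warnings further down in this log). A minimal sketch of the migration the warning recommends, reusing the config path from this run (the --new-config destination is an assumed scratch path):

    # Rewrite the deprecated v1beta3 spec to the current API version.
    # /var/tmp/minikube/kubeadm.yaml is the path used in this log.
    kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.migrated.yaml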
	I0923 13:25:08.764603 2383828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:25:08.773406 2383828 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:25:08.773479 2383828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:25:08.782241 2383828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0923 13:25:08.800013 2383828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:25:08.818404 2383828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0923 13:25:08.836149 2383828 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 13:25:08.839708 2383828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
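	The one-line /etc/hosts edit above is an idempotent stage-and-swap: any stale control-plane.minikube.internal entry is filtered out, the current mapping is appended, and the result is staged in a PID-suffixed temp file before being copied over the live file. The same command spread out with comments (logic unchanged):

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop any stale entry
      echo "192.168.49.2	control-plane.minikube.internal"       # append the current mapping
    } > /tmp/h.$$                                                # stage under this shell's PID
    sudo cp /tmp/h.$$ /etc/hosts                                 # swap the staged copy in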
	I0923 13:25:08.850762 2383828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:25:08.932670 2383828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:25:08.946645 2383828 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262 for IP: 192.168.49.2
	I0923 13:25:08.946664 2383828 certs.go:194] generating shared ca certs ...
	I0923 13:25:08.946681 2383828 certs.go:226] acquiring lock for ca certs: {Name:mka74fca5f9586bfec26165232a0abe6b9527b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:08.946856 2383828 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key
	I0923 13:25:09.534535 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt ...
	I0923 13:25:09.534569 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt: {Name:mkd6669f44b9a5690ab69d1191d9d59bfa475998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.534806 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key ...
	I0923 13:25:09.534822 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key: {Name:mkcb9f518a9706e806f1e3ce2b21f17dd1ea4af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.535463 2383828 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key
	I0923 13:25:09.881577 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt ...
	I0923 13:25:09.881615 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt: {Name:mkfe3b6cdbf84ec160efdee677ace7ad97157d47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.881813 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key ...
	I0923 13:25:09.881828 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key: {Name:mkfb51a840155a14a8cc8bb45048279f9c0b2777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:09.881912 2383828 certs.go:256] generating profile certs ...
	I0923 13:25:09.882006 2383828 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.key
	I0923 13:25:09.882034 2383828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt with IP's: []
	I0923 13:25:10.566644 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt ...
	I0923 13:25:10.566674 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: {Name:mkd81ca15f11b2786974e7876e3c9aed3e2d4234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.567469 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.key ...
	I0923 13:25:10.567490 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.key: {Name:mk6021386003345160ab870bf118db0d5b101e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.567623 2383828 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912
	I0923 13:25:10.567648 2383828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 13:25:10.852497 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912 ...
	I0923 13:25:10.852533 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912: {Name:mk7f27ae99622d8c8fa852d7ef4a1bd4d1377cc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.853247 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912 ...
	I0923 13:25:10.853270 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912: {Name:mke5687c64d611e598a2d4dfa2e1b457cefad09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:10.853768 2383828 certs.go:381] copying /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt.5c5d0912 -> /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt
	I0923 13:25:10.853857 2383828 certs.go:385] copying /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key.5c5d0912 -> /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key
	I0923 13:25:10.853920 2383828 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key
	I0923 13:25:10.853944 2383828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt with IP's: []
	I0923 13:25:11.253287 2383828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt ...
	I0923 13:25:11.253320 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt: {Name:mkec361222a939c4fff7d39836686e89c78445d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:11.253510 2383828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key ...
	I0923 13:25:11.253524 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key: {Name:mkd82ad2e44c4406a63509e86866460eeda368df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:11.253710 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 13:25:11.253753 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem (1078 bytes)
	I0923 13:25:11.253784 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:25:11.253812 2383828 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem (1679 bytes)
	I0923 13:25:11.254465 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:25:11.280459 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:25:11.308504 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:25:11.341407 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:25:11.365448 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 13:25:11.390204 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 13:25:11.414590 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:25:11.439501 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:25:11.463335 2383828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:25:11.488243 2383828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:25:11.506147 2383828 ssh_runner.go:195] Run: openssl version
	I0923 13:25:11.511692 2383828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:25:11.521261 2383828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:25:11.524826 2383828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:25 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:25:11.524943 2383828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:25:11.532134 2383828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
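	The b5213941.0 name is not arbitrary: OpenSSL looks trust anchors up by subject hash, and the openssl x509 -hash call two lines up computes exactly that value for minikubeCA.pem. The two steps as a standalone sketch (hash value taken from the symlink in this run):

    # OpenSSL resolves CAs in /etc/ssl/certs by <subject-hash>.0.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"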
	I0923 13:25:11.541360 2383828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:25:11.544583 2383828 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:25:11.544637 2383828 kubeadm.go:392] StartCluster: {Name:addons-133262 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-133262 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:25:11.544720 2383828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:25:11.544790 2383828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:25:11.581094 2383828 cri.go:89] found id: ""
	I0923 13:25:11.581187 2383828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:25:11.590237 2383828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 13:25:11.599295 2383828 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 13:25:11.599391 2383828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:25:11.608400 2383828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:25:11.608424 2383828 kubeadm.go:157] found existing configuration files:
	
	I0923 13:25:11.608478 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:25:11.617384 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:25:11.617458 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:25:11.626442 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:25:11.635222 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:25:11.635294 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:25:11.643984 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:25:11.653034 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:25:11.653121 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:25:11.661943 2383828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:25:11.670520 2383828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:25:11.670582 2383828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 13:25:11.678902 2383828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 13:25:11.719171 2383828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 13:25:11.719491 2383828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 13:25:11.740162 2383828 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 13:25:11.740239 2383828 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0923 13:25:11.740288 2383828 kubeadm.go:310] OS: Linux
	I0923 13:25:11.740344 2383828 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 13:25:11.740396 2383828 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 13:25:11.740445 2383828 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 13:25:11.740496 2383828 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 13:25:11.740549 2383828 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 13:25:11.740599 2383828 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 13:25:11.740647 2383828 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 13:25:11.740698 2383828 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 13:25:11.740747 2383828 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 13:25:11.804353 2383828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 13:25:11.804468 2383828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 13:25:11.804565 2383828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 13:25:11.811498 2383828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:25:11.813835 2383828 out.go:235]   - Generating certificates and keys ...
	I0923 13:25:11.814031 2383828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 13:25:11.814147 2383828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 13:25:12.062735 2383828 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 13:25:12.591731 2383828 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 13:25:13.268376 2383828 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 13:25:13.777588 2383828 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 13:25:14.367839 2383828 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 13:25:14.368150 2383828 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-133262 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 13:25:14.571927 2383828 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 13:25:14.572261 2383828 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-133262 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 13:25:14.938024 2383828 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 13:25:15.818972 2383828 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 13:25:16.397788 2383828 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 13:25:16.398106 2383828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:25:16.811849 2383828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:25:17.440724 2383828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 13:25:18.228845 2383828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:25:18.373394 2383828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:25:18.887331 2383828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:25:18.888146 2383828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:25:18.891236 2383828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:25:18.893066 2383828 out.go:235]   - Booting up control plane ...
	I0923 13:25:18.893163 2383828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:25:18.893238 2383828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:25:18.894026 2383828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:25:18.904186 2383828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:25:18.910454 2383828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:25:18.910511 2383828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 13:25:19.004454 2383828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 13:25:19.004576 2383828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:25:20.505668 2383828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501072601s
	I0923 13:25:20.505759 2383828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 13:25:26.007712 2383828 kubeadm.go:310] [api-check] The API server is healthy after 5.502311988s
	I0923 13:25:26.031158 2383828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 13:25:26.046565 2383828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 13:25:26.076539 2383828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 13:25:26.076736 2383828 kubeadm.go:310] [mark-control-plane] Marking the node addons-133262 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 13:25:26.087778 2383828 kubeadm.go:310] [bootstrap-token] Using token: kkrgrl.3o8iief7llcjzdwt
	I0923 13:25:26.090470 2383828 out.go:235]   - Configuring RBAC rules ...
	I0923 13:25:26.090609 2383828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 13:25:26.096407 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 13:25:26.106960 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 13:25:26.110745 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 13:25:26.114782 2383828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 13:25:26.119709 2383828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 13:25:26.414947 2383828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 13:25:26.846545 2383828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 13:25:27.414986 2383828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 13:25:27.416208 2383828 kubeadm.go:310] 
	I0923 13:25:27.416286 2383828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 13:25:27.416296 2383828 kubeadm.go:310] 
	I0923 13:25:27.416373 2383828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 13:25:27.416383 2383828 kubeadm.go:310] 
	I0923 13:25:27.416408 2383828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 13:25:27.416469 2383828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 13:25:27.416523 2383828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 13:25:27.416531 2383828 kubeadm.go:310] 
	I0923 13:25:27.416593 2383828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 13:25:27.416602 2383828 kubeadm.go:310] 
	I0923 13:25:27.416649 2383828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 13:25:27.416657 2383828 kubeadm.go:310] 
	I0923 13:25:27.416707 2383828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 13:25:27.416784 2383828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 13:25:27.416855 2383828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 13:25:27.416864 2383828 kubeadm.go:310] 
	I0923 13:25:27.416947 2383828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 13:25:27.417026 2383828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 13:25:27.417034 2383828 kubeadm.go:310] 
	I0923 13:25:27.417117 2383828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kkrgrl.3o8iief7llcjzdwt \
	I0923 13:25:27.417221 2383828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc25ddfa50091362c7bfdbe09ed12c0b94b944390ba1bf979075d78a22051d17 \
	I0923 13:25:27.417246 2383828 kubeadm.go:310] 	--control-plane 
	I0923 13:25:27.417251 2383828 kubeadm.go:310] 
	I0923 13:25:27.417334 2383828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 13:25:27.417344 2383828 kubeadm.go:310] 
	I0923 13:25:27.417424 2383828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kkrgrl.3o8iief7llcjzdwt \
	I0923 13:25:27.417529 2383828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc25ddfa50091362c7bfdbe09ed12c0b94b944390ba1bf979075d78a22051d17 
	I0923 13:25:27.421442 2383828 kubeadm.go:310] W0923 13:25:11.715767    1187 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:25:27.421763 2383828 kubeadm.go:310] W0923 13:25:11.716771    1187 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:25:27.421999 2383828 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0923 13:25:27.422114 2383828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
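	Of the four init warnings, only the last is actionable on a persistent node, and the fix is the command the warning itself names (harmless to skip here, since minikube starts kubelet directly via systemctl start earlier in this log):

    # Quoted from the [WARNING Service-Kubelet] message above.
    sudo systemctl enable kubelet.service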
	I0923 13:25:27.422211 2383828 cni.go:84] Creating CNI manager for ""
	I0923 13:25:27.422223 2383828 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:25:27.424992 2383828 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 13:25:27.427655 2383828 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 13:25:27.434913 2383828 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 13:25:27.434938 2383828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 13:25:27.453393 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 13:25:27.737776 2383828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 13:25:27.737920 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:27.738003 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-133262 minikube.k8s.io/updated_at=2024_09_23T13_25_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-133262 minikube.k8s.io/primary=true
	I0923 13:25:27.874905 2383828 ops.go:34] apiserver oom_adj: -16
	I0923 13:25:27.875025 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:28.375543 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:28.875477 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:29.375599 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:29.875835 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:30.375081 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:30.876003 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:31.375136 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:31.875141 2383828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:25:31.987664 2383828 kubeadm.go:1113] duration metric: took 4.24984179s to wait for elevateKubeSystemPrivileges
	I0923 13:25:31.987703 2383828 kubeadm.go:394] duration metric: took 20.443068903s to StartCluster
	I0923 13:25:31.987722 2383828 settings.go:142] acquiring lock: {Name:mkec0ac22c7afe2712cd8676389ce937f473d18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:31.987847 2383828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:25:31.988235 2383828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/kubeconfig: {Name:mk1c3c49c69db07ab1c6462bef79c6f07c9c4b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:25:31.988441 2383828 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 13:25:31.988585 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 13:25:31.988829 2383828 config.go:182] Loaded profile config "addons-133262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:25:31.988864 2383828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 13:25:31.988948 2383828 addons.go:69] Setting yakd=true in profile "addons-133262"
	I0923 13:25:31.988966 2383828 addons.go:234] Setting addon yakd=true in "addons-133262"
	I0923 13:25:31.988994 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.989510 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.989971 2383828 addons.go:69] Setting cloud-spanner=true in profile "addons-133262"
	I0923 13:25:31.989993 2383828 addons.go:234] Setting addon cloud-spanner=true in "addons-133262"
	I0923 13:25:31.990019 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.990092 2383828 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-133262"
	I0923 13:25:31.990110 2383828 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-133262"
	I0923 13:25:31.990135 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.990504 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.990578 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.994496 2383828 addons.go:69] Setting registry=true in profile "addons-133262"
	I0923 13:25:31.994563 2383828 addons.go:234] Setting addon registry=true in "addons-133262"
	I0923 13:25:31.994616 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.995146 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.995279 2383828 addons.go:69] Setting storage-provisioner=true in profile "addons-133262"
	I0923 13:25:31.996290 2383828 addons.go:234] Setting addon storage-provisioner=true in "addons-133262"
	I0923 13:25:31.996326 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.996775 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999264 2383828 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-133262"
	I0923 13:25:31.999371 2383828 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-133262"
	I0923 13:25:31.999947 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.005578 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999500 2383828 addons.go:69] Setting default-storageclass=true in profile "addons-133262"
	I0923 13:25:32.007965 2383828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-133262"
	I0923 13:25:32.008523 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.995298 2383828 addons.go:69] Setting volcano=true in profile "addons-133262"
	I0923 13:25:32.012948 2383828 addons.go:234] Setting addon volcano=true in "addons-133262"
	I0923 13:25:32.013004 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.013497 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.995305 2383828 addons.go:69] Setting volumesnapshots=true in profile "addons-133262"
	I0923 13:25:32.028502 2383828 addons.go:234] Setting addon volumesnapshots=true in "addons-133262"
	I0923 13:25:32.028573 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.029136 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999508 2383828 addons.go:69] Setting gcp-auth=true in profile "addons-133262"
	I0923 13:25:32.052306 2383828 mustload.go:65] Loading cluster: addons-133262
	I0923 13:25:32.052517 2383828 config.go:182] Loaded profile config "addons-133262": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:25:32.052788 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999514 2383828 addons.go:69] Setting ingress=true in profile "addons-133262"
	I0923 13:25:32.070953 2383828 addons.go:234] Setting addon ingress=true in "addons-133262"
	I0923 13:25:32.071007 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.071476 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999518 2383828 addons.go:69] Setting ingress-dns=true in profile "addons-133262"
	I0923 13:25:32.092899 2383828 addons.go:234] Setting addon ingress-dns=true in "addons-133262"
	I0923 13:25:32.092953 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.093443 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.109354 2383828 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 13:25:32.114598 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 13:25:32.114680 2383828 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 13:25:32.114765 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
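	The --format template in these docker inspect calls digs the host-side port out of the container's published-port map: index .NetworkSettings.Ports "22/tcp" selects the bindings for the container's SSH port, and .HostPort on the first binding yields the local port minikube dials. In this run it resolves to 35734, matching the Port:35734 in the "new ssh client" lines below. Standalone form of the same lookup:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-133262            # prints 35734 in this run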
	I0923 13:25:31.999521 2383828 addons.go:69] Setting inspektor-gadget=true in profile "addons-133262"
	I0923 13:25:32.118436 2383828 addons.go:234] Setting addon inspektor-gadget=true in "addons-133262"
	I0923 13:25:32.118545 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:31.999525 2383828 addons.go:69] Setting metrics-server=true in profile "addons-133262"
	I0923 13:25:32.121769 2383828 addons.go:234] Setting addon metrics-server=true in "addons-133262"
	I0923 13:25:32.121814 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.122333 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:31.999534 2383828 out.go:177] * Verifying Kubernetes components...
	I0923 13:25:32.135133 2383828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:25:31.995291 2383828 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-133262"
	I0923 13:25:32.135493 2383828 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-133262"
	I0923 13:25:32.135848 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.160968 2383828 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 13:25:32.165055 2383828 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 13:25:32.165077 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 13:25:32.165152 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.202702 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 13:25:32.205673 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 13:25:32.205756 2383828 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 13:25:32.205880 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.211247 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 13:25:32.214845 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 13:25:32.219909 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 13:25:32.222637 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 13:25:32.242770 2383828 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 13:25:32.250532 2383828 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 13:25:32.250557 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 13:25:32.250646 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.262838 2383828 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 13:25:32.263472 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	W0923 13:25:32.266501 2383828 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 13:25:32.277288 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:25:32.277562 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 13:25:32.277651 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 13:25:32.279781 2383828 addons.go:234] Setting addon default-storageclass=true in "addons-133262"
	I0923 13:25:32.279819 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.282986 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.285904 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
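	The pipeline above patches CoreDNS in place: it pulls the coredns ConfigMap, uses sed to splice a hosts stanza in front of the forward directive (so host.minikube.internal resolves to the docker bridge gateway, 192.168.49.1) and a log directive in front of errors, then feeds the result to kubectl replace. The spliced stanzas, reconstructed from the sed arguments (indentation in the live Corefile follows the ConfigMap's):

    log                                      # spliced in before the existing "errors" line
    hosts {                                  # spliced in before "forward . /etc/resolv.conf"
       192.168.49.1 host.minikube.internal   # host.minikube.internal -> bridge gateway
       fallthrough                           # all other names fall through to the next plugin
    }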
	I0923 13:25:32.289027 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 13:25:32.289071 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:25:32.289082 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 13:25:32.289194 2383828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:25:32.299635 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 13:25:32.299715 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.317274 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 13:25:32.317462 2383828 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-133262"
	I0923 13:25:32.317498 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.317931 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:32.318077 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 13:25:32.319694 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:32.325032 2383828 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 13:25:32.325087 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 13:25:32.325170 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.359357 2383828 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 13:25:32.361120 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 13:25:32.361147 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 13:25:32.361222 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.361396 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:25:32.365605 2383828 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 13:25:32.365642 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 13:25:32.365710 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.398727 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.402381 2383828 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 13:25:32.402408 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 13:25:32.402473 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.417359 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.428109 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.436811 2383828 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 13:25:32.439528 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 13:25:32.439555 2383828 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 13:25:32.439632 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.506455 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.521998 2383828 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 13:25:32.525970 2383828 out.go:177]   - Using image docker.io/busybox:stable
	I0923 13:25:32.529488 2383828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 13:25:32.529517 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 13:25:32.529582 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.529774 2383828 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 13:25:32.532730 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 13:25:32.532755 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 13:25:32.532824 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.540460 2383828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 13:25:32.540480 2383828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 13:25:32.540539 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:32.540761 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.544187 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.566704 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.583136 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.588518 2383828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:25:32.619886 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.656566 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.657174 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.665769 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:32.672132 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	W0923 13:25:32.672854 2383828 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 13:25:32.672878 2383828 retry.go:31] will retry after 251.380216ms: ssh: handshake failed: EOF
	I0923 13:25:32.834603 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 13:25:32.952650 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 13:25:32.952726 2383828 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 13:25:32.958869 2383828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 13:25:32.958945 2383828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 13:25:32.981074 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 13:25:33.003665 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 13:25:33.003753 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 13:25:33.020302 2383828 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 13:25:33.020395 2383828 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 13:25:33.064932 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 13:25:33.071188 2383828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 13:25:33.071266 2383828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 13:25:33.093040 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 13:25:33.096895 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:25:33.116925 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 13:25:33.118205 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 13:25:33.118262 2383828 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 13:25:33.127649 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 13:25:33.151493 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 13:25:33.151517 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 13:25:33.173138 2383828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 13:25:33.173162 2383828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 13:25:33.186149 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 13:25:33.186172 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 13:25:33.202184 2383828 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 13:25:33.202204 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 13:25:33.247710 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 13:25:33.247785 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 13:25:33.273067 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 13:25:33.273142 2383828 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 13:25:33.288177 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 13:25:33.288259 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 13:25:33.305420 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 13:25:33.305494 2383828 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 13:25:33.353173 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 13:25:33.368265 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 13:25:33.368342 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 13:25:33.437059 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 13:25:33.437132 2383828 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 13:25:33.440876 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 13:25:33.440949 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 13:25:33.449345 2383828 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 13:25:33.449418 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 13:25:33.473562 2383828 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:25:33.473637 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 13:25:33.523594 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 13:25:33.523675 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 13:25:33.583312 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 13:25:33.613866 2383828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 13:25:33.613944 2383828 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 13:25:33.617882 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 13:25:33.617946 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 13:25:33.652467 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:25:33.681957 2383828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 13:25:33.682035 2383828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 13:25:33.690387 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 13:25:33.710624 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 13:25:33.710702 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 13:25:33.780743 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 13:25:33.780817 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 13:25:33.815507 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 13:25:33.815588 2383828 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 13:25:33.857088 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 13:25:33.857166 2383828 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 13:25:33.918017 2383828 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 13:25:33.918092 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 13:25:33.929357 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 13:25:33.929432 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 13:25:33.979747 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 13:25:33.979822 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 13:25:33.983608 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 13:25:34.037007 2383828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 13:25:34.037089 2383828 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 13:25:34.151213 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 13:25:35.781487 2383828 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.192933328s)
	I0923 13:25:35.782545 2383828 node_ready.go:35] waiting up to 6m0s for node "addons-133262" to be "Ready" ...
	I0923 13:25:35.782865 2383828 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.496873728s)
	I0923 13:25:35.782924 2383828 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
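
	The command that just completed rewrites the coredns ConfigMap in place: it pipes the Corefile through sed to splice a hosts block in front of the forward directive (and a log directive before errors), then feeds the result back with kubectl replace. Reconstructed from the sed expression above, the served Corefile ends up containing:

		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}

	so pods can resolve host.minikube.internal to the Docker network gateway, while fallthrough hands every other name to the remaining plugins.
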
	I0923 13:25:36.428913 2383828 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-133262" context rescaled to 1 replicas
	I0923 13:25:36.462382 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.627742097s)
	I0923 13:25:37.802089 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:38.409822 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.428662395s)
	I0923 13:25:38.409900 2383828 addons.go:475] Verifying addon ingress=true in "addons-133262"
	I0923 13:25:38.410127 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.345169486s)
	I0923 13:25:38.410241 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.317116518s)
	I0923 13:25:38.410368 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.293369763s)
	I0923 13:25:38.410583 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.282860168s)
	I0923 13:25:38.410697 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.057460055s)
	I0923 13:25:38.410709 2383828 addons.go:475] Verifying addon registry=true in "addons-133262"
	I0923 13:25:38.410817 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.313378103s)
	I0923 13:25:38.410987 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.827495446s)
	I0923 13:25:38.411193 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.758634316s)
	W0923 13:25:38.412175 2383828 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 13:25:38.412202 2383828 retry.go:31] will retry after 192.996519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
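
	This failure is an ordering race, not a broken manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass that instantiates them go out in a single kubectl apply batch, and the API server had not finished establishing the new CRDs when it tried to map kind VolumeSnapshotClass, hence "no matches for kind ... ensure CRDs are installed first". minikube's remedy is the retry above (and, at 13:25:38.606142 below, a re-apply with --force). An alternative technique, sketched here under assumptions, is to wait for each CRD to report the Established condition before applying resources of that kind; waitForCRDEstablished and the hard-coded kubeconfig path are illustrative, not minikube code.

		package main

		import (
			"context"
			"fmt"
			"time"

			apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
			apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/tools/clientcmd"
		)

		// waitForCRDEstablished polls until the named CRD reports Established=True,
		// so applying custom resources of that kind can no longer race discovery.
		func waitForCRDEstablished(ctx context.Context, c apiextensionsclient.Interface, name string) error {
			for {
				crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err == nil {
					for _, cond := range crd.Status.Conditions {
						if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
							return nil
						}
					}
				}
				select {
				case <-ctx.Done():
					return ctx.Err()
				case <-time.After(200 * time.Millisecond):
				}
			}
		}

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			client := apiextensionsclient.NewForConfigOrDie(cfg)
			ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
			defer cancel()
			if err := waitForCRDEstablished(ctx, client, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
				panic(err)
			}
			fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
		}
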
	I0923 13:25:38.411249 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.720791712s)
	I0923 13:25:38.412240 2383828 addons.go:475] Verifying addon metrics-server=true in "addons-133262"
	I0923 13:25:38.411301 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.42762092s)
	I0923 13:25:38.413014 2383828 out.go:177] * Verifying ingress addon...
	I0923 13:25:38.413041 2383828 out.go:177] * Verifying registry addon...
	I0923 13:25:38.414885 2383828 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-133262 service yakd-dashboard -n yakd-dashboard
	
	I0923 13:25:38.419102 2383828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 13:25:38.419832 2383828 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0923 13:25:38.460388 2383828 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
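
	The default-storageclass warning is Kubernetes optimistic concurrency at work: something else updated the local-path StorageClass between the addon's read and its write, so the write carried a stale resourceVersion and was rejected with "the object has been modified". The conventional fix is what the message suggests, re-read and retry, which client-go packages as retry.RetryOnConflict. A sketch under assumptions: markNonDefault is a hypothetical helper, while the annotation key is the real one this addon toggles.

		package main

		import (
			"context"
			"fmt"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
			"k8s.io/client-go/util/retry"
		)

		// markNonDefault flips the is-default-class annotation on a StorageClass,
		// re-reading and re-trying whenever the write loses an update race.
		func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
			return retry.RetryOnConflict(retry.DefaultRetry, func() error {
				sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return err
				}
				if sc.Annotations == nil {
					sc.Annotations = map[string]string{}
				}
				sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
				_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
				return err // a Conflict here triggers another Get+Update round
			})
		}

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			cs := kubernetes.NewForConfigOrDie(cfg)
			if err := markNonDefault(context.Background(), cs, "local-path"); err != nil {
				panic(err)
			}
			fmt.Println("local-path is no longer the default StorageClass")
		}
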
	I0923 13:25:38.463085 2383828 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 13:25:38.463119 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:38.463335 2383828 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 13:25:38.463348 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
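
	Each kapi.go:96 line that follows is one tick of the same loop: list the pods matching a label selector and keep waiting while any of them is still Pending. Reduced to client-go calls, the poll looks roughly like the sketch below; podsReady and the one-second cadence are illustrative choices, not kapi.go's actual shape.

		package main

		import (
			"context"
			"fmt"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		// podsReady reports whether every pod matching selector in ns is Running.
		func podsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // at least one pod still Pending (or worse)
				}
			}
			return true, nil
		}

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			cs := kubernetes.NewForConfigOrDie(cfg)
			ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
			defer cancel()
			for {
				ok, err := podsReady(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry")
				if err == nil && ok {
					fmt.Println("registry pods are Running")
					return
				}
				select {
				case <-ctx.Done():
					panic(ctx.Err())
				case <-time.After(time.Second):
				}
			}
		}
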
	I0923 13:25:38.606142 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 13:25:39.005138 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:39.021026 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:39.118283 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.966973654s)
	I0923 13:25:39.118406 2383828 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-133262"
	I0923 13:25:39.121268 2383828 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 13:25:39.124770 2383828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 13:25:39.156704 2383828 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 13:25:39.156770 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:39.439937 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:39.444350 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:39.640632 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:39.925058 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:39.925531 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:39.971775 2383828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.365584799s)
	I0923 13:25:40.129039 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:40.286956 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:40.425951 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:40.427311 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:40.630301 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:40.924856 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:40.925869 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:41.129822 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:41.425406 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:41.425833 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:41.629752 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:41.926255 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:41.927436 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:42.132576 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:42.424685 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:42.424871 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:42.635312 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:42.637181 2383828 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 13:25:42.637349 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:42.660570 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:42.775194 2383828 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 13:25:42.787736 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:42.799009 2383828 addons.go:234] Setting addon gcp-auth=true in "addons-133262"
	I0923 13:25:42.799068 2383828 host.go:66] Checking if "addons-133262" exists ...
	I0923 13:25:42.799666 2383828 cli_runner.go:164] Run: docker container inspect addons-133262 --format={{.State.Status}}
	I0923 13:25:42.819110 2383828 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 13:25:42.819169 2383828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-133262
	I0923 13:25:42.837017 2383828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35734 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/addons-133262/id_rsa Username:docker}
	I0923 13:25:42.928031 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:42.928785 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:42.943532 2383828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 13:25:42.946272 2383828 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 13:25:42.948939 2383828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 13:25:42.948964 2383828 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 13:25:42.967771 2383828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 13:25:42.967799 2383828 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 13:25:42.986757 2383828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 13:25:42.986781 2383828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 13:25:43.007805 2383828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 13:25:43.133188 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:43.440942 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:43.448247 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:43.598168 2383828 addons.go:475] Verifying addon gcp-auth=true in "addons-133262"
	I0923 13:25:43.600804 2383828 out.go:177] * Verifying gcp-auth addon...
	I0923 13:25:43.604541 2383828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 13:25:43.614384 2383828 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 13:25:43.614417 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:43.714958 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:43.927260 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:43.928296 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:44.108166 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:44.129766 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:44.425907 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:44.428947 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:44.608989 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:44.629165 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:44.924442 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:44.924911 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:45.109621 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:45.134831 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:45.286174 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:45.423899 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:45.424213 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:45.608699 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:45.630848 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:45.923717 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:45.924806 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:46.108554 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:46.134002 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:46.423457 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:46.423949 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:46.607763 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:46.628666 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:46.923946 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:46.924341 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:47.108334 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:47.128936 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:47.424042 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:47.425089 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:47.608593 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:47.628453 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:47.786101 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:47.924546 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:47.925407 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:48.107567 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:48.129020 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:48.424760 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:48.425682 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:48.607946 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:48.629119 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:48.923346 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:48.924113 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:49.107465 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:49.128820 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:49.423331 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:49.424397 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:49.609143 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:49.628320 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:49.786566 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:49.924514 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:49.924812 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:50.108212 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:50.128656 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:50.423917 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:50.426088 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:50.607776 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:50.627970 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:50.923145 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:50.923993 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:51.108698 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:51.129331 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:51.424158 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:51.424921 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:51.607952 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:51.628227 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:51.923369 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:51.924228 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:52.107969 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:52.129521 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:52.286509 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:52.424000 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:52.424964 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:52.608383 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:52.628653 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:52.924655 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:52.925393 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:53.108542 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:53.129828 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:53.424003 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:53.424995 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:53.608550 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:53.629037 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:53.923760 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:53.924375 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:54.108444 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:54.128575 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:54.424136 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:54.424452 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:54.608327 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:54.628015 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:54.786192 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:54.924376 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:54.925390 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:55.108016 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:55.129009 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:55.424425 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:55.424771 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:55.608044 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:55.628274 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:55.924611 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:55.925486 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:56.108074 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:56.128941 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:56.423602 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:56.424012 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:56.607850 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:56.628723 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:56.786783 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:56.923439 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:56.924812 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:57.108314 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:57.128666 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:57.423521 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:57.424105 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:57.607781 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:57.628448 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:57.925107 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:57.926033 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:58.108563 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:58.128779 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:58.423804 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:58.424364 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:58.607922 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:58.628710 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:58.923609 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:58.924488 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:59.107804 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:59.128570 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:59.285918 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:25:59.424134 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:25:59.424388 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:59.607622 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:25:59.628423 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:25:59.923463 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:25:59.925039 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:00.109338 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:00.130159 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:00.423701 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:00.424594 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:00.607942 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:00.629162 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:00.924187 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:00.924521 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:01.114269 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:01.132902 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:01.286067 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:26:01.424165 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:01.424989 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:01.608584 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:01.628626 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:01.924141 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:01.925153 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:02.109683 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:02.129574 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:02.425376 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:02.427118 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:02.608407 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:02.628353 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:02.927784 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:02.929710 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:03.108803 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:03.128259 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:03.286985 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	I0923 13:26:03.423996 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:03.425124 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:03.607627 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:03.628770 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:03.924209 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:03.925264 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:04.107519 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:04.128969 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:04.424199 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:04.425254 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:04.607825 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:04.628956 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:04.923561 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:04.924425 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:05.108488 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:05.129008 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:05.422992 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:05.424323 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:05.607517 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:05.629014 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:05.786404 2383828 node_ready.go:53] node "addons-133262" has status "Ready":"False"
	[... 87 kapi.go:96 poll entries elided (13:26:05.92 to 13:26:16.62): the four addon selectors remain Pending: [<nil>]; interleaved node_ready.go:53 checks at 13:26:07, 13:26:10, 13:26:12 and 13:26:14 keep reporting node "addons-133262" as "Ready":"False" ...]
	I0923 13:26:16.636992 2383828 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 13:26:16.637020 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:16.805123 2383828 node_ready.go:49] node "addons-133262" has status "Ready":"True"
	I0923 13:26:16.805149 2383828 node_ready.go:38] duration metric: took 41.022536428s for node "addons-133262" to be "Ready" ...
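The node_ready.go entries trace a simple poll loop over the node's Ready condition, and the 41.02s duration metric above is just the elapsed time of that loop. Below is a minimal client-go sketch of the same pattern, assuming a kubeconfig-derived clientset; waitNodeReady and the two-second interval are illustrative choices, not minikube's actual helper:

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it reports True
// or the timeout expires, mirroring the node_ready.go loop above.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitNodeReady(context.Background(), cs, "addons-133262", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}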
	I0923 13:26:16.805159 2383828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:26:16.913885 2383828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r5mdg" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:16.951549 2383828 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 13:26:16.951577 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:16.952438 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
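The kapi.go:86/96 pairs ("Found N Pods for label selector ...", then the per-pod wait state) suggest a list-then-check loop over a label selector. The sketch below shows that pattern; it shares the imports and main of the node example above, and waitPodsBySelector with its Running check is an illustrative assumption about the loop, not minikube's exact logic:

// waitPodsBySelector lists pods matching the label selector, logs how many
// were found, and keeps polling until every match reaches the Running phase.
func waitPodsBySelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing matched yet; keep polling
			}
			fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}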
	[... 10 kapi.go:96 poll entries elided (13:26:17.13 to 13:26:18.13): all four selectors still Pending: [<nil>] ...]
	I0923 13:26:18.421302 2383828 pod_ready.go:93] pod "coredns-7c65d6cfc9-r5mdg" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.421379 2383828 pod_ready.go:82] duration metric: took 1.507456205s for pod "coredns-7c65d6cfc9-r5mdg" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.421409 2383828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.425730 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:18.427429 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:18.429046 2383828 pod_ready.go:93] pod "etcd-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.429069 2383828 pod_ready.go:82] duration metric: took 7.651873ms for pod "etcd-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.429084 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.434109 2383828 pod_ready.go:93] pod "kube-apiserver-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.434138 2383828 pod_ready.go:82] duration metric: took 5.046437ms for pod "kube-apiserver-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.434150 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.439598 2383828 pod_ready.go:93] pod "kube-controller-manager-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.439681 2383828 pod_ready.go:82] duration metric: took 5.521536ms for pod "kube-controller-manager-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.439712 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qsbr8" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.448014 2383828 pod_ready.go:93] pod "kube-proxy-qsbr8" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.448041 2383828 pod_ready.go:82] duration metric: took 8.31315ms for pod "kube-proxy-qsbr8" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.448052 2383828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.608120 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:18.629275 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:18.819692 2383828 pod_ready.go:93] pod "kube-scheduler-addons-133262" in "kube-system" namespace has status "Ready":"True"
	I0923 13:26:18.819716 2383828 pod_ready.go:82] duration metric: took 371.655421ms for pod "kube-scheduler-addons-133262" in "kube-system" namespace to be "Ready" ...
	I0923 13:26:18.819728 2383828 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace to be "Ready" ...
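The "Ready":"True"/"False" strings from pod_ready.go report the pod's PodReady condition rather than its phase, which is why a pod can already be Running yet keep logging "Ready":"False" until its readiness probe passes, as metrics-server-84c5f94fbc-dqnhw does repeatedly below. A sketch of that condition check (isPodReady is an illustrative name; it drops in beside the sketches above and reuses their imports):

// isPodReady reports whether the named pod's PodReady condition is True,
// matching the "Ready":"True"/"False" strings printed by pod_ready.go above.
func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}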
	[... 16 kapi.go:96 poll entries elided (13:26:18.93 to 13:26:20.63): all four selectors still Pending: [<nil>] ...]
	I0923 13:26:20.827914 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	[... 311 entries elided (13:26:20.93 to 13:26:57.63): kapi.go:96 keeps polling the four addon selectors every ~500ms, all still Pending: [<nil>], while pod_ready.go:103 re-checks pod "metrics-server-84c5f94fbc-dqnhw" every ~2.5s and reports "Ready":"False" at 13:26:23, :25, :27, :30, :32, :34, :36, :39, :41, :44, :46, :48, :50, :52 and :55 ...]
	I0923 13:26:57.848276 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:26:57.925589 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:57.926613 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:58.108061 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:58.130231 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:58.428638 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:58.430028 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:58.610101 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:58.630423 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:58.939227 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:58.940370 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:59.108063 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:59.129831 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:59.424864 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:26:59.425049 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:59.608521 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:26:59.629400 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:26:59.924929 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:26:59.925607 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:00.109319 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:00.131385 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:00.326741 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:00.425134 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:00.425736 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:00.608187 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:00.630261 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:00.924611 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:00.925609 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:01.108482 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:01.131704 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:01.430029 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:01.434957 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:01.607779 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:01.630225 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:01.942371 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:01.943654 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:02.108432 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:02.130860 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:02.424998 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:02.426036 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:02.608703 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:02.631070 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:02.826705 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:02.940137 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:02.940713 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:03.108948 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:03.129909 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:03.425654 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:03.428546 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:03.608123 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:03.630220 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:03.929094 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:03.929953 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:04.108375 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:04.130124 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:04.425986 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:04.428220 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:04.609126 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:04.632554 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:04.828070 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:04.924862 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:04.926426 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:05.108702 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:05.130029 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:05.430290 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:05.432601 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:05.609965 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:05.629241 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:05.965785 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:05.986261 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 13:27:06.113906 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:06.221296 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:06.426093 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:06.426830 2383828 kapi.go:107] duration metric: took 1m28.007722418s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 13:27:06.609440 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:06.630984 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:06.828181 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:06.925007 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:07.108169 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:07.130553 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:07.429446 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:07.610515 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:07.631178 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:07.928566 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:08.153119 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:08.155484 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:08.425404 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:08.608582 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:08.631061 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:08.924414 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:09.108227 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:09.132719 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:09.326297 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:09.426358 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:09.608437 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:09.630725 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:09.925223 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:10.109249 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:10.132124 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:10.425940 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:10.608143 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:10.629578 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:10.938523 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:11.109170 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:11.130262 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:11.427987 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:11.610666 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:11.635149 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:11.825783 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:11.924894 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:12.110369 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:12.130346 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:12.424588 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:12.607769 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:12.629949 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:12.930645 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:13.108546 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:13.135919 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:13.426445 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:13.608884 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:13.630692 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:13.827763 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:13.925431 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:14.109183 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:14.129742 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:14.424960 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:14.608136 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:14.630105 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:14.924059 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:15.110266 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:15.130293 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:15.429250 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:15.609425 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:15.630519 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:15.925153 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:16.108867 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:16.130191 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:16.326153 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:16.424176 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:16.608289 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:16.629486 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 13:27:16.924711 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:17.108323 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:17.129255 2383828 kapi.go:107] duration metric: took 1m38.00448827s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 13:27:17.424604 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:17.607610 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:17.924643 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:18.108043 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:18.326219 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:18.424275 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:18.608499 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:18.924779 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:19.108343 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:19.424411 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:19.607534 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:19.925719 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:20.107995 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:20.326285 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:20.424926 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:20.608395 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:20.925436 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:21.108021 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:21.424136 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:21.608172 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:21.925823 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:22.109384 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:22.329093 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:22.425194 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:22.608312 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:22.924266 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:23.108430 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:23.425129 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:23.608158 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:23.925678 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:24.108712 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:24.424294 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:24.608749 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:24.830907 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:24.927893 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:25.115382 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:25.425227 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:25.608049 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:25.925661 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:26.108570 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:26.424563 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:26.608660 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:26.839697 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:26.926678 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:27.109456 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:27.427209 2383828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 13:27:27.608835 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:27.924668 2383828 kapi.go:107] duration metric: took 1m49.504828577s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 13:27:28.108170 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:28.608618 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:29.109389 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:29.328626 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:29.609997 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:30.109077 2383828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 13:27:30.609336 2383828 kapi.go:107] duration metric: took 1m47.004794044s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 13:27:30.611924 2383828 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-133262 cluster.
	I0923 13:27:30.614489 2383828 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 13:27:30.617196 2383828 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 13:27:30.620413 2383828 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 13:27:30.622924 2383828 addons.go:510] duration metric: took 1m58.634040955s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
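
For reference, a minimal client-go sketch of the opt-out label mentioned in the gcp-auth output above. The label key "gcp-auth-skip-secret" comes from the log itself; the "true" value, pod name, namespace, and image are illustrative assumptions, not values taken from minikube's source or docs.

// Sketch: create a pod the gcp-auth webhook should leave alone.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // illustrative name
			Namespace: "default",
			// Per the log's hint, the presence of this label key tells the
			// gcp-auth webhook not to mount credentials into the pod.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
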
	I0923 13:27:31.825787 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:34.326055 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:36.326433 2383828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"False"
	I0923 13:27:36.827741 2383828 pod_ready.go:93] pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:27:36.827771 2383828 pod_ready.go:82] duration metric: took 1m18.008034234s for pod "metrics-server-84c5f94fbc-dqnhw" in "kube-system" namespace to be "Ready" ...
	I0923 13:27:36.827784 2383828 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4m26g" in "kube-system" namespace to be "Ready" ...
	I0923 13:27:36.834630 2383828 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4m26g" in "kube-system" namespace has status "Ready":"True"
	I0923 13:27:36.834660 2383828 pod_ready.go:82] duration metric: took 6.867982ms for pod "nvidia-device-plugin-daemonset-4m26g" in "kube-system" namespace to be "Ready" ...
	I0923 13:27:36.834682 2383828 pod_ready.go:39] duration metric: took 1m20.029511263s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
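
The kapi.go:96 and pod_ready.go:103/93 lines above record two polling loops: one waits for pods matched by a label selector, the other for the Ready=True condition. Below is a minimal client-go sketch of that pattern, assuming a local kubeconfig; the roughly 500ms cadence and the 6m0s budget are read off the log timestamps, and none of this is minikube's actual implementation.

// Sketch: poll until every pod matching a selector reports Ready=True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod carries the Ready=True condition,
// i.e. what the pod_ready.go lines log as "Ready":"True".
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	selector := "kubernetes.io/minikube-addons=registry"
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API hiccups as transient: keep waiting
			}
			for _, p := range pods.Items {
				if !podReady(&p) {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		panic(err)
	}
}
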
	I0923 13:27:36.834698 2383828 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:27:36.834732 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:27:36.834794 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:27:36.888124 2383828 cri.go:89] found id: "9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:36.888148 2383828 cri.go:89] found id: ""
	I0923 13:27:36.888156 2383828 logs.go:276] 1 containers: [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23]
	I0923 13:27:36.888219 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:36.893253 2383828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:27:36.893387 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:27:36.933867 2383828 cri.go:89] found id: "227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:36.933890 2383828 cri.go:89] found id: ""
	I0923 13:27:36.933898 2383828 logs.go:276] 1 containers: [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7]
	I0923 13:27:36.933953 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:36.937393 2383828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:27:36.937521 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:27:36.975388 2383828 cri.go:89] found id: "62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:36.975410 2383828 cri.go:89] found id: ""
	I0923 13:27:36.975418 2383828 logs.go:276] 1 containers: [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6]
	I0923 13:27:36.975488 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:36.978917 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:27:36.978992 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:27:37.026940 2383828 cri.go:89] found id: "1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:37.026968 2383828 cri.go:89] found id: ""
	I0923 13:27:37.026976 2383828 logs.go:276] 1 containers: [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09]
	I0923 13:27:37.027036 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:37.031174 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:27:37.031273 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:27:37.088807 2383828 cri.go:89] found id: "6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:37.088831 2383828 cri.go:89] found id: ""
	I0923 13:27:37.088838 2383828 logs.go:276] 1 containers: [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d]
	I0923 13:27:37.088896 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:37.092489 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:27:37.092589 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:27:37.130778 2383828 cri.go:89] found id: "3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:37.130803 2383828 cri.go:89] found id: ""
	I0923 13:27:37.130810 2383828 logs.go:276] 1 containers: [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a]
	I0923 13:27:37.130892 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:37.134501 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:27:37.134578 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:27:37.173172 2383828 cri.go:89] found id: "de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:37.173194 2383828 cri.go:89] found id: ""
	I0923 13:27:37.173202 2383828 logs.go:276] 1 containers: [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78]
	I0923 13:27:37.173269 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:37.177038 2383828 logs.go:123] Gathering logs for dmesg ...
	I0923 13:27:37.177064 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:27:37.199500 2383828 logs.go:123] Gathering logs for kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] ...
	I0923 13:27:37.199538 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:37.265609 2383828 logs.go:123] Gathering logs for coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] ...
	I0923 13:27:37.265654 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:37.308188 2383828 logs.go:123] Gathering logs for kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] ...
	I0923 13:27:37.308222 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:37.364448 2383828 logs.go:123] Gathering logs for kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] ...
	I0923 13:27:37.364484 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:37.407944 2383828 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:27:37.407976 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:27:37.503765 2383828 logs.go:123] Gathering logs for kubelet ...
	I0923 13:27:37.503806 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 13:27:37.536529 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:37.536775 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:37.596083 2383828 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:27:37.596124 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:27:37.773537 2383828 logs.go:123] Gathering logs for etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] ...
	I0923 13:27:37.773566 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:37.829819 2383828 logs.go:123] Gathering logs for kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] ...
	I0923 13:27:37.829851 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:37.903553 2383828 logs.go:123] Gathering logs for kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] ...
	I0923 13:27:37.903589 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:37.949912 2383828 logs.go:123] Gathering logs for container status ...
	I0923 13:27:37.949945 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:27:38.018475 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:38.018552 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 13:27:38.018621 2383828 out.go:270] X Problems detected in kubelet:
	W0923 13:27:38.018634 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:38.018644 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:38.018658 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:38.018665 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
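
The log-gathering round above repeatedly resolves a container ID with "sudo crictl ps -a --quiet --name=<component>" and then tails it with "sudo crictl logs --tail 400 <id>". A small Go wrapper sketching that sequence follows; the crictl commands and the tail depth are copied from the log, while the wrapper itself is illustrative, not minikube's code.

// Sketch: discover each component's container, then tail its logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerID returns the first ID crictl reports for a container name.
func containerID(name string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return "", err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return "", fmt.Errorf("no container found for %q", name)
	}
	return ids[0], nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, name := range components {
		id, err := containerID(name)
		if err != nil {
			fmt.Println(err)
			continue
		}
		// Same tail depth the report uses when gathering logs.
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("==> %s [%s]\n%s\n", name, id, logs)
	}
}
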
	I0923 13:27:48.019881 2383828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:27:48.035316 2383828 api_server.go:72] duration metric: took 2m16.046841632s to wait for apiserver process to appear ...
	I0923 13:27:48.035344 2383828 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:27:48.035384 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:27:48.035446 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:27:48.085240 2383828 cri.go:89] found id: "9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:48.085263 2383828 cri.go:89] found id: ""
	I0923 13:27:48.085271 2383828 logs.go:276] 1 containers: [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23]
	I0923 13:27:48.085332 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.089041 2383828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:27:48.089114 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:27:48.127126 2383828 cri.go:89] found id: "227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:48.127146 2383828 cri.go:89] found id: ""
	I0923 13:27:48.127154 2383828 logs.go:276] 1 containers: [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7]
	I0923 13:27:48.127220 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.130855 2383828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:27:48.130931 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:27:48.169933 2383828 cri.go:89] found id: "62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:48.169956 2383828 cri.go:89] found id: ""
	I0923 13:27:48.169964 2383828 logs.go:276] 1 containers: [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6]
	I0923 13:27:48.170017 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.173593 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:27:48.173666 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:27:48.217851 2383828 cri.go:89] found id: "1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:48.217875 2383828 cri.go:89] found id: ""
	I0923 13:27:48.217920 2383828 logs.go:276] 1 containers: [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09]
	I0923 13:27:48.217983 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.221539 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:27:48.221608 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:27:48.260958 2383828 cri.go:89] found id: "6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:48.260982 2383828 cri.go:89] found id: ""
	I0923 13:27:48.260990 2383828 logs.go:276] 1 containers: [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d]
	I0923 13:27:48.261047 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.264814 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:27:48.264887 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:27:48.303207 2383828 cri.go:89] found id: "3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:48.303227 2383828 cri.go:89] found id: ""
	I0923 13:27:48.303234 2383828 logs.go:276] 1 containers: [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a]
	I0923 13:27:48.303290 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.307190 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:27:48.307311 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:27:48.345328 2383828 cri.go:89] found id: "de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:48.345353 2383828 cri.go:89] found id: ""
	I0923 13:27:48.345361 2383828 logs.go:276] 1 containers: [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78]
	I0923 13:27:48.345415 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:48.349052 2383828 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:27:48.349077 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:27:48.440481 2383828 logs.go:123] Gathering logs for kubelet ...
	I0923 13:27:48.440519 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 13:27:48.471627 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:48.471961 2383828 logs.go:138] Found kubelet problem: Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:48.532975 2383828 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:27:48.533015 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:27:48.676516 2383828 logs.go:123] Gathering logs for kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] ...
	I0923 13:27:48.676551 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:48.743456 2383828 logs.go:123] Gathering logs for etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] ...
	I0923 13:27:48.743491 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:48.801610 2383828 logs.go:123] Gathering logs for coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] ...
	I0923 13:27:48.801645 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:48.844944 2383828 logs.go:123] Gathering logs for kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] ...
	I0923 13:27:48.844975 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:48.892863 2383828 logs.go:123] Gathering logs for kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] ...
	I0923 13:27:48.892898 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:48.965213 2383828 logs.go:123] Gathering logs for dmesg ...
	I0923 13:27:48.965246 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:27:48.982076 2383828 logs.go:123] Gathering logs for kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] ...
	I0923 13:27:48.982107 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:49.032446 2383828 logs.go:123] Gathering logs for kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] ...
	I0923 13:27:49.032476 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:49.081688 2383828 logs.go:123] Gathering logs for container status ...
	I0923 13:27:49.081717 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:27:49.140973 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:49.141006 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 13:27:49.141069 2383828 out.go:270] X Problems detected in kubelet:
	W0923 13:27:49.141087 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: W0923 13:26:16.549217    1502 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-133262" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-133262' and this object
	W0923 13:27:49.141102 2383828 out.go:270]   Sep 23 13:26:16 addons-133262 kubelet[1502]: E0923 13:26:16.549260    1502 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-133262\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-133262' and this object" logger="UnhandledError"
	I0923 13:27:49.141110 2383828 out.go:358] Setting ErrFile to fd 2...
	I0923 13:27:49.141123 2383828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:27:59.141822 2383828 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:27:59.149585 2383828 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 13:27:59.150578 2383828 api_server.go:141] control plane version: v1.31.1
	I0923 13:27:59.150608 2383828 api_server.go:131] duration metric: took 11.115252928s to wait for apiserver health ...
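
The api_server.go:253/279 lines above show the health gate: GET /healthz on the control plane and accept an HTTP 200 response with body "ok". A minimal sketch, assuming the endpoint seen in the log; skipping TLS verification here is a demo shortcut, not necessarily how minikube authenticates the endpoint.

// Sketch: probe the apiserver healthz endpoint from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver healthy")
	}
}
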
	I0923 13:27:59.150617 2383828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:27:59.150645 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:27:59.150719 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:27:59.197911 2383828 cri.go:89] found id: "9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:59.197932 2383828 cri.go:89] found id: ""
	I0923 13:27:59.197941 2383828 logs.go:276] 1 containers: [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23]
	I0923 13:27:59.197995 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.201940 2383828 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:27:59.202006 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:27:59.238531 2383828 cri.go:89] found id: "227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:27:59.238551 2383828 cri.go:89] found id: ""
	I0923 13:27:59.238559 2383828 logs.go:276] 1 containers: [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7]
	I0923 13:27:59.238611 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.242085 2383828 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:27:59.242204 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:27:59.280989 2383828 cri.go:89] found id: "62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:27:59.281010 2383828 cri.go:89] found id: ""
	I0923 13:27:59.281017 2383828 logs.go:276] 1 containers: [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6]
	I0923 13:27:59.281074 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.284557 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:27:59.284637 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:27:59.324082 2383828 cri.go:89] found id: "1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:27:59.324103 2383828 cri.go:89] found id: ""
	I0923 13:27:59.324111 2383828 logs.go:276] 1 containers: [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09]
	I0923 13:27:59.324165 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.327636 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:27:59.327740 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:27:59.365535 2383828 cri.go:89] found id: "6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:59.365562 2383828 cri.go:89] found id: ""
	I0923 13:27:59.365572 2383828 logs.go:276] 1 containers: [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d]
	I0923 13:27:59.365643 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.369260 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:27:59.369333 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:27:59.406889 2383828 cri.go:89] found id: "3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:59.406956 2383828 cri.go:89] found id: ""
	I0923 13:27:59.406971 2383828 logs.go:276] 1 containers: [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a]
	I0923 13:27:59.407044 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.410404 2383828 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:27:59.410504 2383828 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:27:59.464101 2383828 cri.go:89] found id: "de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:59.464123 2383828 cri.go:89] found id: ""
	I0923 13:27:59.464130 2383828 logs.go:276] 1 containers: [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78]
	I0923 13:27:59.464210 2383828 ssh_runner.go:195] Run: which crictl
	I0923 13:27:59.467715 2383828 logs.go:123] Gathering logs for dmesg ...
	I0923 13:27:59.467741 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:27:59.484127 2383828 logs.go:123] Gathering logs for kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] ...
	I0923 13:27:59.484159 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23"
	I0923 13:27:59.535894 2383828 logs.go:123] Gathering logs for kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] ...
	I0923 13:27:59.535971 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d"
	I0923 13:27:59.581931 2383828 logs.go:123] Gathering logs for container status ...
	I0923 13:27:59.581956 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:27:59.630190 2383828 logs.go:123] Gathering logs for kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] ...
	I0923 13:27:59.630220 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a"
	I0923 13:27:59.697374 2383828 logs.go:123] Gathering logs for kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] ...
	I0923 13:27:59.697409 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78"
	I0923 13:27:59.735991 2383828 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:27:59.736021 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:27:59.826571 2383828 logs.go:123] Gathering logs for kubelet ...
	I0923 13:27:59.826656 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 13:27:59.899998 2383828 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:27:59.900035 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:28:00.099569 2383828 logs.go:123] Gathering logs for etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] ...
	I0923 13:28:00.099607 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7"
	I0923 13:28:00.174513 2383828 logs.go:123] Gathering logs for coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] ...
	I0923 13:28:00.174556 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6"
	I0923 13:28:00.241997 2383828 logs.go:123] Gathering logs for kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] ...
	I0923 13:28:00.242034 2383828 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09"
	I0923 13:28:02.811205 2383828 system_pods.go:59] 18 kube-system pods found
	I0923 13:28:02.811254 2383828 system_pods.go:61] "coredns-7c65d6cfc9-r5mdg" [244c7077-c0d1-4d2d-92f7-49811a2e7840] Running
	I0923 13:28:02.811262 2383828 system_pods.go:61] "csi-hostpath-attacher-0" [2dfdc637-b058-47a4-8127-066e22a8c844] Running
	I0923 13:28:02.811268 2383828 system_pods.go:61] "csi-hostpath-resizer-0" [bf94dfec-f4ec-4276-8c84-e9d52b353dd1] Running
	I0923 13:28:02.811273 2383828 system_pods.go:61] "csi-hostpathplugin-4l5sb" [4b14671b-9a65-4b4f-9656-1a542720db35] Running
	I0923 13:28:02.811278 2383828 system_pods.go:61] "etcd-addons-133262" [ccd2243d-7923-4bd5-aad1-4bcdf84093b0] Running
	I0923 13:28:02.811282 2383828 system_pods.go:61] "kindnet-j682f" [30af3434-889d-4dfc-933a-a18b65eae56b] Running
	I0923 13:28:02.811286 2383828 system_pods.go:61] "kube-apiserver-addons-133262" [a07b8088-fb80-4c58-9f12-a59ce48acae6] Running
	I0923 13:28:02.811290 2383828 system_pods.go:61] "kube-controller-manager-addons-133262" [402fc2e9-9278-4d3c-ba42-58cf9e6f7256] Running
	I0923 13:28:02.811295 2383828 system_pods.go:61] "kube-ingress-dns-minikube" [f3f96ece-39b2-4aef-afc3-deeac0208c34] Running
	I0923 13:28:02.811299 2383828 system_pods.go:61] "kube-proxy-qsbr8" [352eb868-c25d-49b6-9c55-9960dc2cdf8e] Running
	I0923 13:28:02.811303 2383828 system_pods.go:61] "kube-scheduler-addons-133262" [a1b18f24-3925-4dbd-adbf-b70661d68d91] Running
	I0923 13:28:02.811307 2383828 system_pods.go:61] "metrics-server-84c5f94fbc-dqnhw" [6d7335f6-5dfb-4227-9606-8d8b1b126d40] Running
	I0923 13:28:02.811321 2383828 system_pods.go:61] "nvidia-device-plugin-daemonset-4m26g" [c0e73bf1-5273-4a14-9517-202ce22276b8] Running
	I0923 13:28:02.811325 2383828 system_pods.go:61] "registry-66c9cd494c-2g5d2" [d093e650-6688-49f8-9c46-28a49dd5a974] Running
	I0923 13:28:02.811328 2383828 system_pods.go:61] "registry-proxy-pqtjc" [cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8] Running
	I0923 13:28:02.811339 2383828 system_pods.go:61] "snapshot-controller-56fcc65765-5t68w" [15a9f6f7-dd61-455c-be65-26312ab5fa53] Running
	I0923 13:28:02.811343 2383828 system_pods.go:61] "snapshot-controller-56fcc65765-mjwxw" [8d203518-0a49-462e-b208-58bf3d4f9059] Running
	I0923 13:28:02.811346 2383828 system_pods.go:61] "storage-provisioner" [c54ff386-7dac-4422-9ce3-010b14a0da61] Running
	I0923 13:28:02.811353 2383828 system_pods.go:74] duration metric: took 3.660729215s to wait for pod list to return data ...
	I0923 13:28:02.811364 2383828 default_sa.go:34] waiting for default service account to be created ...
	I0923 13:28:02.814522 2383828 default_sa.go:45] found service account: "default"
	I0923 13:28:02.814550 2383828 default_sa.go:55] duration metric: took 3.179207ms for default service account to be created ...
	I0923 13:28:02.814561 2383828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 13:28:02.824546 2383828 system_pods.go:86] 18 kube-system pods found
	I0923 13:28:02.824586 2383828 system_pods.go:89] "coredns-7c65d6cfc9-r5mdg" [244c7077-c0d1-4d2d-92f7-49811a2e7840] Running
	I0923 13:28:02.824595 2383828 system_pods.go:89] "csi-hostpath-attacher-0" [2dfdc637-b058-47a4-8127-066e22a8c844] Running
	I0923 13:28:02.824600 2383828 system_pods.go:89] "csi-hostpath-resizer-0" [bf94dfec-f4ec-4276-8c84-e9d52b353dd1] Running
	I0923 13:28:02.824627 2383828 system_pods.go:89] "csi-hostpathplugin-4l5sb" [4b14671b-9a65-4b4f-9656-1a542720db35] Running
	I0923 13:28:02.824639 2383828 system_pods.go:89] "etcd-addons-133262" [ccd2243d-7923-4bd5-aad1-4bcdf84093b0] Running
	I0923 13:28:02.824644 2383828 system_pods.go:89] "kindnet-j682f" [30af3434-889d-4dfc-933a-a18b65eae56b] Running
	I0923 13:28:02.824650 2383828 system_pods.go:89] "kube-apiserver-addons-133262" [a07b8088-fb80-4c58-9f12-a59ce48acae6] Running
	I0923 13:28:02.824661 2383828 system_pods.go:89] "kube-controller-manager-addons-133262" [402fc2e9-9278-4d3c-ba42-58cf9e6f7256] Running
	I0923 13:28:02.824666 2383828 system_pods.go:89] "kube-ingress-dns-minikube" [f3f96ece-39b2-4aef-afc3-deeac0208c34] Running
	I0923 13:28:02.824670 2383828 system_pods.go:89] "kube-proxy-qsbr8" [352eb868-c25d-49b6-9c55-9960dc2cdf8e] Running
	I0923 13:28:02.824680 2383828 system_pods.go:89] "kube-scheduler-addons-133262" [a1b18f24-3925-4dbd-adbf-b70661d68d91] Running
	I0923 13:28:02.824685 2383828 system_pods.go:89] "metrics-server-84c5f94fbc-dqnhw" [6d7335f6-5dfb-4227-9606-8d8b1b126d40] Running
	I0923 13:28:02.824707 2383828 system_pods.go:89] "nvidia-device-plugin-daemonset-4m26g" [c0e73bf1-5273-4a14-9517-202ce22276b8] Running
	I0923 13:28:02.824719 2383828 system_pods.go:89] "registry-66c9cd494c-2g5d2" [d093e650-6688-49f8-9c46-28a49dd5a974] Running
	I0923 13:28:02.824724 2383828 system_pods.go:89] "registry-proxy-pqtjc" [cb6ceb80-6e9e-4cb0-8229-2ffe7f03b5f8] Running
	I0923 13:28:02.824744 2383828 system_pods.go:89] "snapshot-controller-56fcc65765-5t68w" [15a9f6f7-dd61-455c-be65-26312ab5fa53] Running
	I0923 13:28:02.824749 2383828 system_pods.go:89] "snapshot-controller-56fcc65765-mjwxw" [8d203518-0a49-462e-b208-58bf3d4f9059] Running
	I0923 13:28:02.824755 2383828 system_pods.go:89] "storage-provisioner" [c54ff386-7dac-4422-9ce3-010b14a0da61] Running
	I0923 13:28:02.824763 2383828 system_pods.go:126] duration metric: took 10.19587ms to wait for k8s-apps to be running ...
	I0923 13:28:02.824776 2383828 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:28:02.824845 2383828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:28:02.836586 2383828 system_svc.go:56] duration metric: took 11.795464ms WaitForService to wait for kubelet
	I0923 13:28:02.836625 2383828 kubeadm.go:582] duration metric: took 2m30.848156578s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:28:02.836643 2383828 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:28:02.840270 2383828 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:28:02.840307 2383828 node_conditions.go:123] node cpu capacity is 2
	I0923 13:28:02.840319 2383828 node_conditions.go:105] duration metric: took 3.655882ms to run NodePressure ...
	I0923 13:28:02.840330 2383828 start.go:241] waiting for startup goroutines ...
	I0923 13:28:02.840338 2383828 start.go:246] waiting for cluster config update ...
	I0923 13:28:02.840354 2383828 start.go:255] writing updated cluster config ...
	I0923 13:28:02.840649 2383828 ssh_runner.go:195] Run: rm -f paused
	I0923 13:28:03.209187 2383828 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 13:28:03.213065 2383828 out.go:177] * Done! kubectl is now configured to use "addons-133262" cluster and "default" namespace by default
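The run above gathers every component's logs with the same two primitives: crictl logs --tail 400 <id> for containers and journalctl -u <unit> -n 400 for services. A minimal sketch of reproducing that by hand against this profile (the container ID below is the kube-apiserver ID taken from this log; any other ID would come from the crictl listing first):

    $ minikube -p addons-133262 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
    $ minikube -p addons-133262 ssh -- sudo /usr/bin/crictl logs --tail 400 9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23
    $ minikube -p addons-133262 ssh -- sudo journalctl -u crio -n 400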
	
	
	==> CRI-O <==
	Sep 23 13:41:27 addons-133262 crio[966]: time="2024-09-23 13:41:27.434688705Z" level=info msg="Removed pod sandbox: 53cf4c8305e8c90c562a088e2f2a6c041631e0f098f9ced65152c38d638c955a" id=203fe49a-5501-44ee-9bf5-a8046efef353 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 23 13:41:27 addons-133262 crio[966]: time="2024-09-23 13:41:27.435197730Z" level=info msg="Stopping pod sandbox: 13108374025b04fbea6758d791df5c0d551537f78da88d3cee57590375049a0a" id=0e4f26ad-3e4f-4de3-81eb-b0c8b8087cd1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:41:27 addons-133262 crio[966]: time="2024-09-23 13:41:27.435232412Z" level=info msg="Stopped pod sandbox (already stopped): 13108374025b04fbea6758d791df5c0d551537f78da88d3cee57590375049a0a" id=0e4f26ad-3e4f-4de3-81eb-b0c8b8087cd1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:41:27 addons-133262 crio[966]: time="2024-09-23 13:41:27.435697336Z" level=info msg="Removing pod sandbox: 13108374025b04fbea6758d791df5c0d551537f78da88d3cee57590375049a0a" id=b395de86-d080-4b74-9fe4-87ed72ea7f51 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 23 13:41:27 addons-133262 crio[966]: time="2024-09-23 13:41:27.445581463Z" level=info msg="Removed pod sandbox: 13108374025b04fbea6758d791df5c0d551537f78da88d3cee57590375049a0a" id=b395de86-d080-4b74-9fe4-87ed72ea7f51 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 23 13:41:36 addons-133262 crio[966]: time="2024-09-23 13:41:36.760986067Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8748e82-ba74-4779-8f57-7f8c8652ba4e name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:41:36 addons-133262 crio[966]: time="2024-09-23 13:41:36.761220383Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b8748e82-ba74-4779-8f57-7f8c8652ba4e name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:41:47 addons-133262 crio[966]: time="2024-09-23 13:41:47.760457868Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0595e21-9f0b-4668-9cdf-5e59a6cfd2ad name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:41:47 addons-133262 crio[966]: time="2024-09-23 13:41:47.760694046Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f0595e21-9f0b-4668-9cdf-5e59a6cfd2ad name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:41:59 addons-133262 crio[966]: time="2024-09-23 13:41:59.760575499Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2ac9d6a7-337a-4796-8904-b032c59ab791 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:41:59 addons-133262 crio[966]: time="2024-09-23 13:41:59.760816133Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2ac9d6a7-337a-4796-8904-b032c59ab791 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:42:14 addons-133262 crio[966]: time="2024-09-23 13:42:14.760721885Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7fd6bf56-27ea-4ae6-af1c-b19176a58624 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:42:14 addons-133262 crio[966]: time="2024-09-23 13:42:14.760954954Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7fd6bf56-27ea-4ae6-af1c-b19176a58624 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:42:29 addons-133262 crio[966]: time="2024-09-23 13:42:29.759704200Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8e794385-90f6-4ae5-9775-5c95ea431a89 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:42:29 addons-133262 crio[966]: time="2024-09-23 13:42:29.759925807Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8e794385-90f6-4ae5-9775-5c95ea431a89 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:42:43 addons-133262 crio[966]: time="2024-09-23 13:42:43.760530927Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ee8fe443-9747-4dd9-9fb6-38f41fd4690c name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:42:43 addons-133262 crio[966]: time="2024-09-23 13:42:43.760768870Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ee8fe443-9747-4dd9-9fb6-38f41fd4690c name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:42:55 addons-133262 crio[966]: time="2024-09-23 13:42:55.760949478Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4dc5d16d-9add-4392-8117-accf8728e412 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:42:55 addons-133262 crio[966]: time="2024-09-23 13:42:55.761174465Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4dc5d16d-9add-4392-8117-accf8728e412 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:43:05 addons-133262 crio[966]: time="2024-09-23 13:43:05.724261082Z" level=info msg="Stopping container: b0b2fe538d362d53a0945b86d3223bb97dae4ff799e2592cd5d7f2a4ef813b39 (timeout: 30s)" id=441a2b6e-7321-4b61-b7e5-20ec9c34caed name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 13:43:06 addons-133262 crio[966]: time="2024-09-23 13:43:06.886491460Z" level=info msg="Stopped container b0b2fe538d362d53a0945b86d3223bb97dae4ff799e2592cd5d7f2a4ef813b39: kube-system/metrics-server-84c5f94fbc-dqnhw/metrics-server" id=441a2b6e-7321-4b61-b7e5-20ec9c34caed name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 13:43:06 addons-133262 crio[966]: time="2024-09-23 13:43:06.887043856Z" level=info msg="Stopping pod sandbox: dbcdb7b69735cd0939e586a138e7e158e13329bf4c2f8387ceea9da00e7244bc" id=b81c840f-cf56-4d0a-baae-3408fb918c30 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 13:43:06 addons-133262 crio[966]: time="2024-09-23 13:43:06.887280051Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-dqnhw Namespace:kube-system ID:dbcdb7b69735cd0939e586a138e7e158e13329bf4c2f8387ceea9da00e7244bc UID:6d7335f6-5dfb-4227-9606-8d8b1b126d40 NetNS:/var/run/netns/bb03e52e-eafe-4500-80b9-f748dcbbbd61 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 23 13:43:06 addons-133262 crio[966]: time="2024-09-23 13:43:06.887420635Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-dqnhw from CNI network \"kindnet\" (type=ptp)"
	Sep 23 13:43:06 addons-133262 crio[966]: time="2024-09-23 13:43:06.917465217Z" level=info msg="Stopped pod sandbox: dbcdb7b69735cd0939e586a138e7e158e13329bf4c2f8387ceea9da00e7244bc" id=b81c840f-cf56-4d0a-baae-3408fb918c30 name=/runtime.v1.RuntimeService/StopPodSandbox
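The repeated "Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" pairs show the kubelet polling ImageStatus for that image every 12-15 seconds without it ever arriving. A plausible manual check from the node: crictl inspecti reports local image status, and crictl pull attempts the fetch directly, which would surface any registry or network error behind the missing image.

    $ minikube -p addons-133262 ssh -- sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc
    $ minikube -p addons-133262 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc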
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4ea829001ea4f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   7c0ffc4aafe47       hello-world-app-55bf9c44b4-zvnjf
	1ce6fef620d09       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                         4 minutes ago       Running             nginx                     0                   e467bc93e7432       nginx
	334680bd78e33       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69            15 minutes ago      Running             gcp-auth                  0                   2c1d4aa6e8775       gcp-auth-89d5ffd79-sn4tn
	b0b2fe538d362       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   16 minutes ago      Exited              metrics-server            0                   dbcdb7b69735c       metrics-server-84c5f94fbc-dqnhw
	846b4d1bcfbe3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        16 minutes ago      Running             storage-provisioner       0                   a4e85889dbd73       storage-provisioner
	62d73ade94f57       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        16 minutes ago      Running             coredns                   0                   ccac108e74df4       coredns-7c65d6cfc9-r5mdg
	6e1da3a73993a       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                        17 minutes ago      Running             kube-proxy                0                   3929648a8d7f9       kube-proxy-qsbr8
	de10c80270b5c       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                        17 minutes ago      Running             kindnet-cni               0                   107beb5e7b8ce       kindnet-j682f
	1ef3f97eb6473       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                        17 minutes ago      Running             kube-scheduler            0                   9b8411a580ef2       kube-scheduler-addons-133262
	3cf91c4e890ab       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                        17 minutes ago      Running             kube-controller-manager   0                   ed11482c3169e       kube-controller-manager-addons-133262
	9a2762b26053f       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                        17 minutes ago      Running             kube-apiserver            0                   02dbc597f6b2f       kube-apiserver-addons-133262
	227c9772e72a3       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        17 minutes ago      Running             etcd                      0                   7c44e58ec4ddc       etcd-addons-133262
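Everything here is Running except metrics-server, whose container exited when its pod was stopped at 13:43:05 (see the "Stopping container" entry in the CRI-O log above). Assuming the upstream k8s-app=metrics-server label, the pod's state and last events could be inspected with:

    $ kubectl -n kube-system get pods -l k8s-app=metrics-server
    $ kubectl -n kube-system describe pod -l k8s-app=metrics-server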
	
	
	==> coredns [62d73ade94f578fe499e548df92a03d57a22853697ce4eca13495f2b4a5437b6] <==
	[INFO] 10.244.0.15:58839 - 14543 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000098385s
	[INFO] 10.244.0.15:53549 - 55590 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002790769s
	[INFO] 10.244.0.15:53549 - 53051 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003201599s
	[INFO] 10.244.0.15:57616 - 17518 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0004867s
	[INFO] 10.244.0.15:57616 - 5395 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000831857s
	[INFO] 10.244.0.15:45938 - 8747 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000161046s
	[INFO] 10.244.0.15:45938 - 40758 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200742s
	[INFO] 10.244.0.15:35197 - 55448 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055129s
	[INFO] 10.244.0.15:35197 - 11418 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055506s
	[INFO] 10.244.0.15:55894 - 47736 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100871s
	[INFO] 10.244.0.15:55894 - 56694 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011304s
	[INFO] 10.244.0.15:44812 - 41796 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001561136s
	[INFO] 10.244.0.15:44812 - 9538 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00191687s
	[INFO] 10.244.0.15:49269 - 61781 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081385s
	[INFO] 10.244.0.15:49269 - 20566 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000043453s
	[INFO] 10.244.0.20:57660 - 31419 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000212123s
	[INFO] 10.244.0.20:32983 - 51792 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000108314s
	[INFO] 10.244.0.20:49419 - 11345 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013397s
	[INFO] 10.244.0.20:59959 - 61304 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001039721s
	[INFO] 10.244.0.20:40904 - 968 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127275s
	[INFO] 10.244.0.20:60236 - 53744 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132911s
	[INFO] 10.244.0.20:44058 - 55419 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002363448s
	[INFO] 10.244.0.20:45850 - 62938 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002112385s
	[INFO] 10.244.0.20:37367 - 36922 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001718064s
	[INFO] 10.244.0.20:53861 - 52609 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002198873s
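These lookups walk the pod's DNS search path: each cluster.local suffix and the us-east-2.compute.internal resolver suffix returns NXDOMAIN before the fully qualified name answers NOERROR, which is normal. Since registry.kube-system.svc.cluster.local resolves cleanly here, in-cluster name resolution for the registry service was working. A quick repro from a throwaway pod (busybox 1.28 is used because its nslookup handles cluster search paths sanely):

    $ kubectl run dnstest --rm -it --restart=Never --image=busybox:1.28 -- \
        nslookup registry.kube-system.svc.cluster.local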
	
	
	==> describe nodes <==
	Name:               addons-133262
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-133262
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-133262
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_25_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-133262
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:25:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-133262
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:43:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:41:37 +0000   Mon, 23 Sep 2024 13:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:41:37 +0000   Mon, 23 Sep 2024 13:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:41:37 +0000   Mon, 23 Sep 2024 13:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:41:37 +0000   Mon, 23 Sep 2024 13:26:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-133262
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 956a9a3790d546e98f478aa431b93546
	  System UUID:                87adfa53-2e43-424b-9596-ae2d9c13f82d
	  Boot ID:                    97839423-83c8-4f76-b1f5-7b978ef1271e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-world-app-55bf9c44b4-zvnjf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  gcp-auth                    gcp-auth-89d5ffd79-sn4tn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-r5mdg                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-addons-133262                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-j682f                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-133262             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-133262    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-qsbr8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-133262             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 17m   kube-proxy       
	  Normal   Starting                 17m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m   kubelet          Node addons-133262 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m   kubelet          Node addons-133262 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m   kubelet          Node addons-133262 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m   node-controller  Node addons-133262 event: Registered Node addons-133262 in Controller
	  Normal   NodeReady                16m   kubelet          Node addons-133262 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep23 13:41] hrtimer: interrupt took 2926293 ns
	
	
	==> etcd [227c9772e72a3fdf37beef09351c0f33183b31a12c4f4c7f337fb2712a87bec7] <==
	{"level":"info","ts":"2024-09-23T13:25:21.710353Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:25:21.710990Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:25:21.711900Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T13:25:21.739279Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-09-23T13:25:34.842632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.135341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-23T13:25:34.842748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.290389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:25:34.842768Z","caller":"traceutil/trace.go:171","msg":"trace[2058929202] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:340; }","duration":"101.314314ms","start":"2024-09-23T13:25:34.741449Z","end":"2024-09-23T13:25:34.842763Z","steps":["trace[2058929202] 'range keys from in-memory index tree'  (duration: 100.503503ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:34.842723Z","caller":"traceutil/trace.go:171","msg":"trace[446664898] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:340; }","duration":"101.242874ms","start":"2024-09-23T13:25:34.741467Z","end":"2024-09-23T13:25:34.842710Z","steps":["trace[446664898] 'range keys from in-memory index tree'  (duration: 100.54671ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:35.278741Z","caller":"traceutil/trace.go:171","msg":"trace[254033020] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"109.264528ms","start":"2024-09-23T13:25:35.169436Z","end":"2024-09-23T13:25:35.278701Z","steps":["trace[254033020] 'process raft request'  (duration: 25.693032ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:35.543061Z","caller":"traceutil/trace.go:171","msg":"trace[1953560179] linearizableReadLoop","detail":"{readStateIndex:358; appliedIndex:358; }","duration":"246.516564ms","start":"2024-09-23T13:25:35.296531Z","end":"2024-09-23T13:25:35.543048Z","steps":["trace[1953560179] 'read index received'  (duration: 246.511993ms)","trace[1953560179] 'applied index is now lower than readState.Index'  (duration: 3.75µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:25:35.546513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"249.962136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:25:35.549918Z","caller":"traceutil/trace.go:171","msg":"trace[1294112056] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:346; }","duration":"253.35678ms","start":"2024-09-23T13:25:35.296527Z","end":"2024-09-23T13:25:35.549883Z","steps":["trace[1294112056] 'agreement among raft nodes before linearized reading'  (duration: 249.932811ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:25:35.585444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.084397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-23T13:25:35.594064Z","caller":"traceutil/trace.go:171","msg":"trace[876578767] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:347; }","duration":"297.060415ms","start":"2024-09-23T13:25:35.296987Z","end":"2024-09-23T13:25:35.594048Z","steps":["trace[876578767] 'agreement among raft nodes before linearized reading'  (duration: 288.392254ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:38.966922Z","caller":"traceutil/trace.go:171","msg":"trace[1773688844] transaction","detail":"{read_only:false; response_revision:683; number_of_response:1; }","duration":"178.146917ms","start":"2024-09-23T13:25:38.788751Z","end":"2024-09-23T13:25:38.966898Z","steps":["trace[1773688844] 'process raft request'  (duration: 178.069176ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:25:38.969539Z","caller":"traceutil/trace.go:171","msg":"trace[208207188] linearizableReadLoop","detail":"{readStateIndex:708; appliedIndex:708; }","duration":"179.399909ms","start":"2024-09-23T13:25:38.790120Z","end":"2024-09-23T13:25:38.969520Z","steps":["trace[208207188] 'read index received'  (duration: 179.395093ms)","trace[208207188] 'applied index is now lower than readState.Index'  (duration: 3.47µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:25:38.995483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.007661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-133262\" ","response":"range_response_count:1 size:5745"}
	{"level":"info","ts":"2024-09-23T13:25:38.995537Z","caller":"traceutil/trace.go:171","msg":"trace[562549751] range","detail":"{range_begin:/registry/minions/addons-133262; range_end:; response_count:1; response_revision:683; }","duration":"207.069297ms","start":"2024-09-23T13:25:38.788455Z","end":"2024-09-23T13:25:38.995524Z","steps":["trace[562549751] 'agreement among raft nodes before linearized reading'  (duration: 181.122667ms)","trace[562549751] 'range keys from in-memory index tree'  (duration: 25.81627ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T13:25:38.995831Z","caller":"traceutil/trace.go:171","msg":"trace[317156449] transaction","detail":"{read_only:false; response_revision:684; number_of_response:1; }","duration":"198.798657ms","start":"2024-09-23T13:25:38.797023Z","end":"2024-09-23T13:25:38.995821Z","steps":["trace[317156449] 'process raft request'  (duration: 172.804094ms)","trace[317156449] 'compare'  (duration: 25.418387ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T13:35:21.883294Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1525}
	{"level":"info","ts":"2024-09-23T13:35:21.915067Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1525,"took":"31.250229ms","hash":2887434094,"current-db-size-bytes":6610944,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":3317760,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-23T13:35:21.915114Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2887434094,"revision":1525,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T13:40:21.889636Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1941}
	{"level":"info","ts":"2024-09-23T13:40:21.907973Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1941,"took":"17.79001ms","hash":785404094,"current-db-size-bytes":6610944,"current-db-size":"6.6 MB","current-db-size-in-use-bytes":4599808,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-23T13:40:21.908026Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":785404094,"revision":1941,"compact-revision":1525}
	
	
	==> gcp-auth [334680bd78e33f77a791df37c38d964e3d859e5ec3bc4717d639109d0e519646] <==
	2024/09/23 13:28:03 Ready to marshal response ...
	2024/09/23 13:28:03 Ready to write response ...
	2024/09/23 13:36:17 Ready to marshal response ...
	2024/09/23 13:36:17 Ready to write response ...
	2024/09/23 13:36:25 Ready to marshal response ...
	2024/09/23 13:36:25 Ready to write response ...
	2024/09/23 13:36:25 Ready to marshal response ...
	2024/09/23 13:36:25 Ready to write response ...
	2024/09/23 13:36:35 Ready to marshal response ...
	2024/09/23 13:36:35 Ready to write response ...
	2024/09/23 13:37:21 Ready to marshal response ...
	2024/09/23 13:37:21 Ready to write response ...
	2024/09/23 13:37:21 Ready to marshal response ...
	2024/09/23 13:37:21 Ready to write response ...
	2024/09/23 13:37:21 http: TLS handshake error from 10.244.0.1:4500: EOF
	2024/09/23 13:37:21 Ready to marshal response ...
	2024/09/23 13:37:21 Ready to write response ...
	2024/09/23 13:37:44 Ready to marshal response ...
	2024/09/23 13:37:44 Ready to write response ...
	2024/09/23 13:38:07 Ready to marshal response ...
	2024/09/23 13:38:07 Ready to write response ...
	2024/09/23 13:38:35 Ready to marshal response ...
	2024/09/23 13:38:35 Ready to write response ...
	2024/09/23 13:40:57 Ready to marshal response ...
	2024/09/23 13:40:57 Ready to write response ...
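The single TLS handshake EOF at 13:37:21, amid otherwise clean marshal/write pairs, looks like one dropped client connection rather than a failing webhook. Whether the gcp-auth mutating webhook is still registered can be checked with the command below (the exact configuration name is an assumption; grep the output for gcp-auth):

    $ kubectl get mutatingwebhookconfigurations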
	
	
	==> kernel <==
	 13:43:07 up 15:25,  0 users,  load average: 0.17, 0.36, 1.12
	Linux addons-133262 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [de10c80270b5c435a07bf2cdd2c740feaca9c44e62cd5d9e684a2896313bde78] <==
	I0923 13:41:06.083650       1 main.go:299] handling current node
	I0923 13:41:16.083472       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:41:16.083593       1 main.go:299] handling current node
	I0923 13:41:26.083592       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:41:26.083627       1 main.go:299] handling current node
	I0923 13:41:36.082800       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:41:36.082832       1 main.go:299] handling current node
	I0923 13:41:46.082971       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:41:46.083027       1 main.go:299] handling current node
	I0923 13:41:56.083696       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:41:56.083734       1 main.go:299] handling current node
	I0923 13:42:06.082952       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:42:06.083003       1 main.go:299] handling current node
	I0923 13:42:16.083588       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:42:16.083621       1 main.go:299] handling current node
	I0923 13:42:26.082971       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:42:26.083012       1 main.go:299] handling current node
	I0923 13:42:36.083338       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:42:36.083377       1 main.go:299] handling current node
	I0923 13:42:46.082922       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:42:46.083064       1 main.go:299] handling current node
	I0923 13:42:56.083558       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:42:56.083591       1 main.go:299] handling current node
	I0923 13:43:06.082728       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:43:06.082763       1 main.go:299] handling current node
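kindnet is re-syncing the same single node every ten seconds, which is the expected steady state for a one-node cluster; its view of 192.168.49.2 matches the node's InternalIP in the describe output above. To confirm from outside the node:

    $ kubectl get nodes -o wide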
	
	
	==> kube-apiserver [9a2762b26053fe1e23ed48078b84ac1d1716e7c148f7cdfcad8856f20fd15d23] <==
	I0923 13:27:41.883186       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0923 13:36:36.296973       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 13:36:36.308652       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 13:36:36.324442       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0923 13:36:51.321281       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 13:37:21.628239       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.15.199"}
	I0923 13:37:57.080666       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 13:38:22.800811       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 13:38:22.800859       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 13:38:22.852138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 13:38:22.852382       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 13:38:22.905433       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 13:38:22.905588       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 13:38:22.951836       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 13:38:22.951889       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 13:38:23.063669       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 13:38:23.063803       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 13:38:23.952514       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 13:38:24.064411       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0923 13:38:24.075015       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0923 13:38:29.808252       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 13:38:30.849285       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 13:38:35.396198       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 13:38:35.713316       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.7.99"}
	I0923 13:40:57.534167       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.25.111"}
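Two things stand out here: the snapshot.storage.k8s.io and gadget.kinvolk.io watchers are terminated at 13:38, which is what the apiserver logs when those CRDs are deleted, and service IPs for nginx and hello-world-app are allocated normally afterwards. Whether those API groups are really gone can be confirmed with:

    $ kubectl api-resources --api-group=snapshot.storage.k8s.io
    $ kubectl get crds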
	
	
	==> kube-controller-manager [3cf91c4e890abea1b695f0a46f5af238b0a5f268e5f0166d11badc05231c743a] <==
	I0923 13:41:10.518869       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0923 13:41:34.983659       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:41:34.983704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:41:35.622915       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:41:35.622959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 13:41:37.100777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-133262"
	W0923 13:41:39.735396       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:41:39.735442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:42:00.255884       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:42:00.255932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:42:18.082043       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:42:18.082087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:42:22.287160       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:42:22.287202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:42:27.087566       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:42:27.087613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:42:50.306200       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:42:50.306248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:42:53.718051       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:42:53.718095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 13:42:57.807641       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:42:57.807684       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 13:43:05.694766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="6.072µs"
	W0923 13:43:06.313686       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 13:43:06.313731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
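The recurring PartialObjectMetadata list failures appear to come from the metadata informers the controller-manager keeps for every discovered resource: after the CRD deletions at 13:38 they retry against types that no longer exist until discovery re-syncs, so they are noisy but consistent with the teardown rather than a new fault. Aggregated API availability, relevant once metrics-server stops serving, can be checked with:

    $ kubectl get apiservice v1beta1.metrics.k8s.io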
	
	
	==> kube-proxy [6e1da3a73993a53f6b725952a4fe73eee122a33d92fd39897c80ddf7390d476d] <==
	I0923 13:25:36.937749       1 server_linux.go:66] "Using iptables proxy"
	I0923 13:25:37.338915       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 13:25:37.338986       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:25:37.413835       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 13:25:37.413972       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:25:37.415844       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:25:37.416398       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:25:37.416459       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:25:37.423427       1 config.go:199] "Starting service config controller"
	I0923 13:25:37.423523       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:25:37.423586       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:25:37.423616       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:25:37.424042       1 config.go:328] "Starting node config controller"
	I0923 13:25:37.424095       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:25:37.584302       1 shared_informer.go:320] Caches are synced for node config
	I0923 13:25:37.584359       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:25:37.623656       1 shared_informer.go:320] Caches are synced for service config
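The startup warning about nodePortAddresses being unset is advisory: NodePort traffic is accepted on every local IP unless the field is set, as the message itself suggests with --nodeport-addresses primary. In a kubeadm-provisioned cluster such as this one, the setting lives in the kube-proxy ConfigMap:

    $ kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses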
	
	
	==> kube-scheduler [1ef3f97eb64734d3bc13db5b077af47265ae085903534bd9d901c7b8fc97af09] <==
	W0923 13:25:24.580187       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 13:25:24.582008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:24.580220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:25:24.582107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:24.580318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:25:24.582203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:24.580371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:25:24.582354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.418271       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 13:25:25.418450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.587950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 13:25:25.588075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.616405       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:25:25.616450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.642462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 13:25:25.647975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.666673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:25:25.666824       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.673612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:25:25.673747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.684524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 13:25:25.684652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:25:25.718405       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:25:25.718452       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 13:25:27.559102       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:41:59 addons-133262 kubelet[1502]: E0923 13:41:59.761060    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cfa3221c-db2f-4216-995d-eb27c9ca5f19"
	Sep 23 13:42:07 addons-133262 kubelet[1502]: E0923 13:42:07.488033    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098927487746227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:07 addons-133262 kubelet[1502]: E0923 13:42:07.488080    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098927487746227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:14 addons-133262 kubelet[1502]: E0923 13:42:14.761462    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cfa3221c-db2f-4216-995d-eb27c9ca5f19"
	Sep 23 13:42:17 addons-133262 kubelet[1502]: E0923 13:42:17.490117    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098937489826585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:17 addons-133262 kubelet[1502]: E0923 13:42:17.490170    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098937489826585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:27 addons-133262 kubelet[1502]: E0923 13:42:27.492126    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098947491888012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:27 addons-133262 kubelet[1502]: E0923 13:42:27.492179    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098947491888012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:29 addons-133262 kubelet[1502]: E0923 13:42:29.760359    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cfa3221c-db2f-4216-995d-eb27c9ca5f19"
	Sep 23 13:42:37 addons-133262 kubelet[1502]: E0923 13:42:37.495061    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098957494773660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:37 addons-133262 kubelet[1502]: E0923 13:42:37.495099    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098957494773660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:43 addons-133262 kubelet[1502]: E0923 13:42:43.761004    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cfa3221c-db2f-4216-995d-eb27c9ca5f19"
	Sep 23 13:42:47 addons-133262 kubelet[1502]: E0923 13:42:47.497889    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098967497647573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:47 addons-133262 kubelet[1502]: E0923 13:42:47.497929    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098967497647573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:55 addons-133262 kubelet[1502]: E0923 13:42:55.761624    1502 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cfa3221c-db2f-4216-995d-eb27c9ca5f19"
	Sep 23 13:42:57 addons-133262 kubelet[1502]: E0923 13:42:57.500589    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098977500368967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:42:57 addons-133262 kubelet[1502]: E0923 13:42:57.500622    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098977500368967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:43:06 addons-133262 kubelet[1502]: I0923 13:43:06.991025    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6zx4\" (UniqueName: \"kubernetes.io/projected/6d7335f6-5dfb-4227-9606-8d8b1b126d40-kube-api-access-h6zx4\") pod \"6d7335f6-5dfb-4227-9606-8d8b1b126d40\" (UID: \"6d7335f6-5dfb-4227-9606-8d8b1b126d40\") "
	Sep 23 13:43:06 addons-133262 kubelet[1502]: I0923 13:43:06.991094    1502 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6d7335f6-5dfb-4227-9606-8d8b1b126d40-tmp-dir\") pod \"6d7335f6-5dfb-4227-9606-8d8b1b126d40\" (UID: \"6d7335f6-5dfb-4227-9606-8d8b1b126d40\") "
	Sep 23 13:43:06 addons-133262 kubelet[1502]: I0923 13:43:06.991559    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d7335f6-5dfb-4227-9606-8d8b1b126d40-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6d7335f6-5dfb-4227-9606-8d8b1b126d40" (UID: "6d7335f6-5dfb-4227-9606-8d8b1b126d40"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 23 13:43:06 addons-133262 kubelet[1502]: I0923 13:43:06.994782    1502 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d7335f6-5dfb-4227-9606-8d8b1b126d40-kube-api-access-h6zx4" (OuterVolumeSpecName: "kube-api-access-h6zx4") pod "6d7335f6-5dfb-4227-9606-8d8b1b126d40" (UID: "6d7335f6-5dfb-4227-9606-8d8b1b126d40"). InnerVolumeSpecName "kube-api-access-h6zx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 13:43:07 addons-133262 kubelet[1502]: I0923 13:43:07.092166    1502 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h6zx4\" (UniqueName: \"kubernetes.io/projected/6d7335f6-5dfb-4227-9606-8d8b1b126d40-kube-api-access-h6zx4\") on node \"addons-133262\" DevicePath \"\""
	Sep 23 13:43:07 addons-133262 kubelet[1502]: I0923 13:43:07.092237    1502 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6d7335f6-5dfb-4227-9606-8d8b1b126d40-tmp-dir\") on node \"addons-133262\" DevicePath \"\""
	Sep 23 13:43:07 addons-133262 kubelet[1502]: E0923 13:43:07.504642    1502 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098987504212292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:43:07 addons-133262 kubelet[1502]: E0923 13:43:07.504678    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727098987504212292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572294,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [846b4d1bcfbe362e097d8174a0b2808c301ad53a9959a5c8577ae8669f7374d8] <==
	I0923 13:26:17.410205       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 13:26:17.427878       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 13:26:17.428011       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 13:26:17.451710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 13:26:17.452694       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-133262_255eadcb-81cb-4ff4-8832-d04e319c6694!
	I0923 13:26:17.453702       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f2b24c6-4123-42bd-a56d-cf65e312df77", APIVersion:"v1", ResourceVersion:"901", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-133262_255eadcb-81cb-4ff4-8832-d04e319c6694 became leader
	I0923 13:26:17.552987       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-133262_255eadcb-81cb-4ff4-8832-d04e319c6694!
	

                                                
                                                
-- /stdout --
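The scheduler's "forbidden" list/watch warnings in the log above are startup races: they occur before the scheduler's RBAC grants have propagated, and the closing "Caches are synced" line shows they resolved on their own. The recurring kubelet ImagePullBackOff entries, by contrast, record a genuine pull failure for gcr.io/k8s-minikube/busybox:1.28.4-glibc, and the eviction-manager "missing image stats" errors look like CRI-O stats noise rather than the cause of the failure. A minimal way to tell a transient RBAC denial from a real one is impersonation; these commands are illustrative and not part of the test run:

	kubectl --context addons-133262 auth can-i list pods --as=system:kube-scheduler
	kubectl --context addons-133262 auth can-i list persistentvolumeclaims --as=system:kube-scheduler

If both answer "yes" once the cluster has settled, the early warnings were only ordering noise.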
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-133262 -n addons-133262
helpers_test.go:261: (dbg) Run:  kubectl --context addons-133262 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-133262 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-133262 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-133262/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 13:28:03 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2xb2r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2xb2r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  15m                   default-scheduler  Successfully assigned default/busybox to addons-133262
	  Normal   Pulling    13m (x4 over 15m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     13m (x4 over 15m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     13m (x4 over 15m)     kubelet            Error: ErrImagePull
	  Warning  Failed     13m (x6 over 15m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m58s (x43 over 15m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
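The decisive detail in the events above is "unable to retrieve auth token: invalid username/password" while pulling a public gcr.io image, which usually points at stale or corrupt registry credentials on the node rather than a missing image. A quick manual check, sketched here on the assumption that the node is reachable via minikube ssh (the test itself does not run this):

	# Pull directly through CRI-O on the node; a clean pull exonerates
	# the image and points back at the kubelet's credential handling.
	minikube -p addons-133262 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"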
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (329.97s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (126.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-952506 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0923 13:56:44.466901 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:57:12.168653 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:58:03.743046 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-952506 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m1.723840188s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-952506       NotReady   control-plane   10m     v1.31.1
	ha-952506-m02   Ready      control-plane   9m45s   v1.31.1
	ha-952506-m04   Ready      <none>          7m25s   v1.31.1

                                                
                                                
-- /stdout --
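The restart itself succeeded (the start command returned after roughly two minutes), but the primary node had not re-registered as Ready by the time the test polled. Reasonable first diagnostics for a NotReady node in this setup, shown only as a sketch and not executed by the harness:

	kubectl describe node ha-952506
	minikube -p ha-952506 ssh "sudo systemctl status kubelet --no-pager"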
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

                                                
                                                
-- /stdout --
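The go-template at ha_test.go:592 walks each node's conditions and prints the status of the condition whose type is Ready, so the leading Unknown corresponds to ha-952506's NotReady state above. An equivalent jsonpath query that pairs each node name with its Ready status (an illustrative alternative, not what the test executes):

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'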
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-952506
helpers_test.go:235: (dbg) docker inspect ha-952506:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bdebc792e7c455cbc6f78a638b85a950a5e11ee9e984295d53f0bb3d9e2f2bef",
	        "Created": "2024-09-23T13:47:46.714493318Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2444001,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T13:56:17.940834596Z",
	            "FinishedAt": "2024-09-23T13:56:17.171434584Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/bdebc792e7c455cbc6f78a638b85a950a5e11ee9e984295d53f0bb3d9e2f2bef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bdebc792e7c455cbc6f78a638b85a950a5e11ee9e984295d53f0bb3d9e2f2bef/hostname",
	        "HostsPath": "/var/lib/docker/containers/bdebc792e7c455cbc6f78a638b85a950a5e11ee9e984295d53f0bb3d9e2f2bef/hosts",
	        "LogPath": "/var/lib/docker/containers/bdebc792e7c455cbc6f78a638b85a950a5e11ee9e984295d53f0bb3d9e2f2bef/bdebc792e7c455cbc6f78a638b85a950a5e11ee9e984295d53f0bb3d9e2f2bef-json.log",
	        "Name": "/ha-952506",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-952506:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-952506",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4dd03bdc573fec183c2198625c6f7c705e26a68ffa24210c592f0cfef6028007-init/diff:/var/lib/docker/overlay2/cb21b5e82393f0d5264c7db3ef721bc402a1fb078a3835cf5b3c87b0c534f7c3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4dd03bdc573fec183c2198625c6f7c705e26a68ffa24210c592f0cfef6028007/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4dd03bdc573fec183c2198625c6f7c705e26a68ffa24210c592f0cfef6028007/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4dd03bdc573fec183c2198625c6f7c705e26a68ffa24210c592f0cfef6028007/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-952506",
	                "Source": "/var/lib/docker/volumes/ha-952506/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-952506",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-952506",
	                "name.minikube.sigs.k8s.io": "ha-952506",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "97aac0a69411f1d7347df07ba09d826836671c738d1ff60d29725b9dcd8d35f7",
	            "SandboxKey": "/var/run/docker/netns/97aac0a69411",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35794"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35795"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35798"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35796"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35797"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-952506": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e2123346e879e32d5375a169a6dc4868e1790ad827a83930c458e4c12323b57c",
	                    "EndpointID": "10de4e2b7ef258fb7f840980bc2313475e605bf52b5df8160ce2bd6dffb669c8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-952506",
	                        "bdebc792e7c4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
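The inspect output above shows the kicbase container republishing SSH (22), Docker (2376), and the apiserver (8443) on fresh 127.0.0.1 host ports after the restart, which is how minikube keeps reaching the node. To resolve a single mapping without parsing the JSON (a convenience command, not part of the harness):

	docker port ha-952506 8443/tcp
	# prints 127.0.0.1:35797 for this run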
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-952506 -n ha-952506
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-952506 logs -n 25: (2.06244306s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-952506 cp ha-952506-m03:/home/docker/cp-test.txt                              | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506-m04:/home/docker/cp-test_ha-952506-m03_ha-952506-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-952506 ssh -n                                                                 | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-952506 ssh -n ha-952506-m04 sudo cat                                          | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | /home/docker/cp-test_ha-952506-m03_ha-952506-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-952506 cp testdata/cp-test.txt                                                | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-952506 ssh -n                                                                 | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-952506 cp ha-952506-m04:/home/docker/cp-test.txt                              | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4226284073/001/cp-test_ha-952506-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-952506 ssh -n                                                                 | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-952506 cp ha-952506-m04:/home/docker/cp-test.txt                              | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506:/home/docker/cp-test_ha-952506-m04_ha-952506.txt                       |           |         |         |                     |                     |
	| ssh     | ha-952506 ssh -n                                                                 | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-952506 ssh -n ha-952506 sudo cat                                              | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | /home/docker/cp-test_ha-952506-m04_ha-952506.txt                                 |           |         |         |                     |                     |
	| cp      | ha-952506 cp ha-952506-m04:/home/docker/cp-test.txt                              | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506-m02:/home/docker/cp-test_ha-952506-m04_ha-952506-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-952506 ssh -n                                                                 | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-952506 ssh -n ha-952506-m02 sudo cat                                          | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | /home/docker/cp-test_ha-952506-m04_ha-952506-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-952506 cp ha-952506-m04:/home/docker/cp-test.txt                              | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506-m03:/home/docker/cp-test_ha-952506-m04_ha-952506-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-952506 ssh -n                                                                 | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | ha-952506-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-952506 ssh -n ha-952506-m03 sudo cat                                          | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | /home/docker/cp-test_ha-952506-m04_ha-952506-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-952506 node stop m02 -v=7                                                     | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-952506 node start m02 -v=7                                                    | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:51 UTC | 23 Sep 24 13:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-952506 -v=7                                                           | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-952506 -v=7                                                                | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:52 UTC | 23 Sep 24 13:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-952506 --wait=true -v=7                                                    | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:52 UTC | 23 Sep 24 13:55 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-952506                                                                | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:55 UTC |                     |
	| node    | ha-952506 node delete m03 -v=7                                                   | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:55 UTC | 23 Sep 24 13:55 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-952506 stop -v=7                                                              | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:55 UTC | 23 Sep 24 13:56 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-952506 --wait=true                                                         | ha-952506 | jenkins | v1.34.0 | 23 Sep 24 13:56 UTC | 23 Sep 24 13:58 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:56:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:56:17.577097 2443810 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:56:17.577317 2443810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:56:17.577345 2443810 out.go:358] Setting ErrFile to fd 2...
	I0923 13:56:17.577367 2443810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:56:17.578499 2443810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 13:56:17.578919 2443810 out.go:352] Setting JSON to false
	I0923 13:56:17.579813 2443810 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":56320,"bootTime":1727043457,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0923 13:56:17.579892 2443810 start.go:139] virtualization:  
	I0923 13:56:17.584837 2443810 out.go:177] * [ha-952506] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:56:17.587705 2443810 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:56:17.587715 2443810 notify.go:220] Checking for updates...
	I0923 13:56:17.590525 2443810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:56:17.593047 2443810 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:56:17.595930 2443810 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	I0923 13:56:17.598462 2443810 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:56:17.600850 2443810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:56:17.603955 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:56:17.604490 2443810 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:56:17.633964 2443810 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:56:17.634116 2443810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:56:17.695539 2443810 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-23 13:56:17.686243485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:56:17.695648 2443810 docker.go:318] overlay module found
	I0923 13:56:17.698461 2443810 out.go:177] * Using the docker driver based on existing profile
	I0923 13:56:17.700770 2443810 start.go:297] selected driver: docker
	I0923 13:56:17.700789 2443810 start.go:901] validating driver "docker" against &{Name:ha-952506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-952506 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:56:17.700979 2443810 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:56:17.701092 2443810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:56:17.753298 2443810 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:41 SystemTime:2024-09-23 13:56:17.743842677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:56:17.753771 2443810 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:56:17.753800 2443810 cni.go:84] Creating CNI manager for ""
	I0923 13:56:17.753838 2443810 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:56:17.753895 2443810 start.go:340] cluster config:
	{Name:ha-952506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-952506 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:56:17.756875 2443810 out.go:177] * Starting "ha-952506" primary control-plane node in "ha-952506" cluster
	I0923 13:56:17.759319 2443810 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 13:56:17.761948 2443810 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:56:17.764572 2443810 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:56:17.764637 2443810 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0923 13:56:17.764650 2443810 cache.go:56] Caching tarball of preloaded images
	I0923 13:56:17.764663 2443810 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:56:17.764732 2443810 preload.go:172] Found /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0923 13:56:17.764742 2443810 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 13:56:17.764901 2443810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/config.json ...
	I0923 13:56:17.783405 2443810 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 13:56:17.783429 2443810 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 13:56:17.783444 2443810 cache.go:194] Successfully downloaded all kic artifacts
	I0923 13:56:17.783467 2443810 start.go:360] acquireMachinesLock for ha-952506: {Name:mk11b6e961a8653ad2df4ec0a4b782782748819f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:56:17.783537 2443810 start.go:364] duration metric: took 47.285µs to acquireMachinesLock for "ha-952506"
	I0923 13:56:17.783562 2443810 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:56:17.783571 2443810 fix.go:54] fixHost starting: 
	I0923 13:56:17.783842 2443810 cli_runner.go:164] Run: docker container inspect ha-952506 --format={{.State.Status}}
	I0923 13:56:17.799631 2443810 fix.go:112] recreateIfNeeded on ha-952506: state=Stopped err=<nil>
	W0923 13:56:17.799661 2443810 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:56:17.802689 2443810 out.go:177] * Restarting existing docker container for "ha-952506" ...
	I0923 13:56:17.805350 2443810 cli_runner.go:164] Run: docker start ha-952506
	I0923 13:56:18.119743 2443810 cli_runner.go:164] Run: docker container inspect ha-952506 --format={{.State.Status}}
	I0923 13:56:18.142645 2443810 kic.go:430] container "ha-952506" state is running.
	I0923 13:56:18.143037 2443810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506
	I0923 13:56:18.166010 2443810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/config.json ...
	I0923 13:56:18.167395 2443810 machine.go:93] provisionDockerMachine start ...
	I0923 13:56:18.167485 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:56:18.188810 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:56:18.189102 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35794 <nil> <nil>}
	I0923 13:56:18.189266 2443810 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:56:18.189887 2443810 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0923 13:56:21.321827 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-952506
	
	I0923 13:56:21.321852 2443810 ubuntu.go:169] provisioning hostname "ha-952506"
	I0923 13:56:21.321913 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:56:21.338506 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:56:21.338770 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35794 <nil> <nil>}
	I0923 13:56:21.338788 2443810 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-952506 && echo "ha-952506" | sudo tee /etc/hostname
	I0923 13:56:21.486488 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-952506
	
	I0923 13:56:21.486585 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:56:21.504155 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:56:21.504467 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35794 <nil> <nil>}
	I0923 13:56:21.504489 2443810 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-952506' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-952506/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-952506' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:56:21.638695 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:56:21.638724 2443810 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-2377681/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-2377681/.minikube}
	I0923 13:56:21.638750 2443810 ubuntu.go:177] setting up certificates
	I0923 13:56:21.638758 2443810 provision.go:84] configureAuth start
	I0923 13:56:21.638818 2443810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506
	I0923 13:56:21.655443 2443810 provision.go:143] copyHostCerts
	I0923 13:56:21.655489 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem
	I0923 13:56:21.655523 2443810 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem, removing ...
	I0923 13:56:21.655536 2443810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem
	I0923 13:56:21.655620 2443810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem (1679 bytes)
	I0923 13:56:21.655725 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem
	I0923 13:56:21.655760 2443810 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem, removing ...
	I0923 13:56:21.655768 2443810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem
	I0923 13:56:21.655800 2443810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem (1078 bytes)
	I0923 13:56:21.655897 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem
	I0923 13:56:21.655918 2443810 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem, removing ...
	I0923 13:56:21.655924 2443810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem
	I0923 13:56:21.655949 2443810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem (1123 bytes)
	I0923 13:56:21.656007 2443810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem org=jenkins.ha-952506 san=[127.0.0.1 192.168.49.2 ha-952506 localhost minikube]
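
provision.go:117 above regenerates the machine's server certificate (later copied to /etc/docker/server.pem), signed by minikubeCA, with the SAN set logged: IPs 127.0.0.1 and 192.168.49.2, DNS names ha-952506, localhost, and minikube. A self-contained crypto/x509 sketch of that step; it creates a throwaway in-process CA, whereas the real provisioner loads ca.pem/ca-key.pem from disk, and errors are elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for minikubeCA (the real provisioner
    	// loads ca.pem / ca-key.pem instead of generating one).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate carrying the SANs from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-952506"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:     []string{"ha-952506", "localhost", "minikube"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Printf("issued server cert, %d DER bytes, SANs: %v + %v\n",
    		len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
    }
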
	I0923 13:56:21.882520 2443810 provision.go:177] copyRemoteCerts
	I0923 13:56:21.882590 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:56:21.882644 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:56:21.899398 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35794 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506/id_rsa Username:docker}
	I0923 13:56:21.996389 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 13:56:21.996475 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:56:22.024998 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 13:56:22.025063 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0923 13:56:22.053375 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 13:56:22.053438 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:56:22.078792 2443810 provision.go:87] duration metric: took 440.019704ms to configureAuth
	I0923 13:56:22.078868 2443810 ubuntu.go:193] setting minikube options for container-runtime
	I0923 13:56:22.079136 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:56:22.079256 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:56:22.096894 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:56:22.097181 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35794 <nil> <nil>}
	I0923 13:56:22.097200 2443810 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:56:22.524646 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:56:22.524669 2443810 machine.go:96] duration metric: took 4.357251407s to provisionDockerMachine
	I0923 13:56:22.524680 2443810 start.go:293] postStartSetup for "ha-952506" (driver="docker")
	I0923 13:56:22.524691 2443810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:56:22.524747 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:56:22.524785 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:56:22.555981 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35794 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506/id_rsa Username:docker}
	I0923 13:56:22.651433 2443810 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:56:22.654664 2443810 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 13:56:22.654704 2443810 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 13:56:22.654716 2443810 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 13:56:22.654724 2443810 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 13:56:22.654738 2443810 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/addons for local assets ...
	I0923 13:56:22.654796 2443810 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/files for local assets ...
	I0923 13:56:22.654880 2443810 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem -> 23830702.pem in /etc/ssl/certs
	I0923 13:56:22.654892 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem -> /etc/ssl/certs/23830702.pem
	I0923 13:56:22.654994 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:56:22.663395 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem --> /etc/ssl/certs/23830702.pem (1708 bytes)
	I0923 13:56:22.687664 2443810 start.go:296] duration metric: took 162.968222ms for postStartSetup
	I0923 13:56:22.687746 2443810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:56:22.687807 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:56:22.704253 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35794 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506/id_rsa Username:docker}
	I0923 13:56:22.795507 2443810 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 13:56:22.800336 2443810 fix.go:56] duration metric: took 5.016757804s for fixHost
	I0923 13:56:22.800359 2443810 start.go:83] releasing machines lock for "ha-952506", held for 5.016808175s
	I0923 13:56:22.800451 2443810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506
	I0923 13:56:22.816982 2443810 ssh_runner.go:195] Run: cat /version.json
	I0923 13:56:22.817037 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:56:22.817050 2443810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:56:22.817221 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:56:22.846128 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35794 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506/id_rsa Username:docker}
	I0923 13:56:22.847823 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35794 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506/id_rsa Username:docker}
	I0923 13:56:23.079926 2443810 ssh_runner.go:195] Run: systemctl --version
	I0923 13:56:23.084781 2443810 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:56:23.226453 2443810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:56:23.230958 2443810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:56:23.239932 2443810 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0923 13:56:23.240013 2443810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:56:23.250845 2443810 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
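
The find/-exec mv commands above disable any pre-existing loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs (kindnet, per the earlier recommendation) stays active. A local Go sketch of the same rename-to-disable pattern; the real commands run over SSH inside the node container:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Rename matching CNI configs to <name>.mk_disabled, the same
    	// effect as the find/-exec mv pipeline in the log.
    	matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, m := range matches {
    		if strings.HasSuffix(m, ".mk_disabled") {
    			continue // already disabled
    		}
    		if err := os.Rename(m, m+".mk_disabled"); err != nil {
    			fmt.Println("rename failed:", err)
    		}
    	}
    }
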
	I0923 13:56:23.250869 2443810 start.go:495] detecting cgroup driver to use...
	I0923 13:56:23.250903 2443810 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 13:56:23.250952 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:56:23.264217 2443810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:56:23.275584 2443810 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:56:23.275655 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:56:23.289319 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:56:23.300897 2443810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:56:23.381055 2443810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:56:23.467416 2443810 docker.go:233] disabling docker service ...
	I0923 13:56:23.467506 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:56:23.480736 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:56:23.492538 2443810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:56:23.580244 2443810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:56:23.663493 2443810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 13:56:23.675690 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:56:23.692747 2443810 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 13:56:23.692860 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:23.703861 2443810 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:56:23.703976 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:23.714886 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:23.725817 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:23.736058 2443810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:56:23.746120 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:23.756895 2443810 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:23.766678 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:23.776548 2443810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:56:23.785678 2443810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:56:23.794752 2443810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:56:23.876588 2443810 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 13:56:23.995819 2443810 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:56:23.995923 2443810 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:56:23.999685 2443810 start.go:563] Will wait 60s for crictl version
	I0923 13:56:23.999763 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:56:24.003787 2443810 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:56:24.045620 2443810 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0923 13:56:24.045722 2443810 ssh_runner.go:195] Run: crio --version
	I0923 13:56:24.084162 2443810 ssh_runner.go:195] Run: crio --version
	I0923 13:56:24.124180 2443810 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0923 13:56:24.126650 2443810 cli_runner.go:164] Run: docker network inspect ha-952506 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:56:24.142124 2443810 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 13:56:24.145755 2443810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
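
This grep/echo pipeline pins host.minikube.internal to the network gateway (192.168.49.1) in the guest's /etc/hosts: any stale entry is filtered out, the current mapping appended, and the result copied back with sudo. A Go sketch of the equivalent edit, run locally rather than over SSH (writing /etc/hosts needs root, like the sudo cp in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.49.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	// Drop any stale host.minikube.internal line, then append the
    	// current one, mirroring the grep -v / echo pipeline above.
    	var keep []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		keep = append(keep, line)
    	}
    	keep = append(keep, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0644); err != nil {
    		fmt.Println(err) // requires root
    	}
    }
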
	I0923 13:56:24.157269 2443810 kubeadm.go:883] updating cluster {Name:ha-952506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-952506 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:56:24.157438 2443810 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:56:24.157512 2443810 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:56:24.208040 2443810 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:56:24.208066 2443810 crio.go:433] Images already preloaded, skipping extraction
	I0923 13:56:24.208141 2443810 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 13:56:24.246298 2443810 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 13:56:24.246341 2443810 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:56:24.246351 2443810 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0923 13:56:24.246457 2443810 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-952506 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-952506 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:56:24.246555 2443810 ssh_runner.go:195] Run: crio config
	I0923 13:56:24.298257 2443810 cni.go:84] Creating CNI manager for ""
	I0923 13:56:24.298280 2443810 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:56:24.298299 2443810 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:56:24.298337 2443810 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-952506 NodeName:ha-952506 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:56:24.298497 2443810 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-952506"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
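
The kubeadm.yaml above is rendered from a Go text/template with the node's IP, port, CRI socket, and name substituted in from the cluster config logged earlier. A toy sketch of that rendering step; the template fragment and parameter struct here are stand-ins, not minikube's actual bootstrapper template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Stand-in template; the real one covers far more fields.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    type params struct {
    	NodeIP        string
    	APIServerPort int
    	CRISocket     string
    	NodeName      string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	_ = t.Execute(os.Stdout, params{
    		NodeIP:        "192.168.49.2",
    		APIServerPort: 8443,
    		CRISocket:     "unix:///var/run/crio/crio.sock",
    		NodeName:      "ha-952506",
    	})
    }
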
	
	I0923 13:56:24.298519 2443810 kube-vip.go:115] generating kube-vip config ...
	I0923 13:56:24.298576 2443810 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0923 13:56:24.311783 2443810 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 13:56:24.311929 2443810 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
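
In the kube-vip manifest above, the leader-election knobs follow the usual ordering (lease duration 5s > renew deadline 3s > retry period 1s), the VIP 192.168.49.254 matches APIServerHAVIP from the cluster config, and prometheus_server exposes metrics on :2112 on the node running kube-vip (hostNetwork: true). A small Go probe against that metrics endpoint, handy when checking whether the VIP holder is alive; the node IP below is taken from this cluster's config, not a constant of every cluster:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// :2112 comes from prometheus_server in the manifest above;
    	// 192.168.49.2 is this cluster's primary node.
    	c := &http.Client{Timeout: 2 * time.Second}
    	resp, err := c.Get("http://192.168.49.2:2112/metrics")
    	if err != nil {
    		fmt.Println("kube-vip metrics unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	head, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
    	fmt.Printf("status %d\n%s\n", resp.StatusCode, head)
    }
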
	I0923 13:56:24.312004 2443810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:56:24.321086 2443810 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:56:24.321172 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 13:56:24.330201 2443810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0923 13:56:24.348700 2443810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:56:24.366193 2443810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0923 13:56:24.384931 2443810 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 13:56:24.403315 2443810 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0923 13:56:24.406847 2443810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:56:24.417683 2443810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:56:24.503769 2443810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:56:24.517914 2443810 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506 for IP: 192.168.49.2
	I0923 13:56:24.517936 2443810 certs.go:194] generating shared ca certs ...
	I0923 13:56:24.517952 2443810 certs.go:226] acquiring lock for ca certs: {Name:mka74fca5f9586bfec26165232a0abe6b9527b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:56:24.518103 2443810 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key
	I0923 13:56:24.518151 2443810 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key
	I0923 13:56:24.518162 2443810 certs.go:256] generating profile certs ...
	I0923 13:56:24.518237 2443810 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/client.key
	I0923 13:56:24.518270 2443810 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.key.f8b19486
	I0923 13:56:24.518293 2443810 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.crt.f8b19486 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0923 13:56:25.237780 2443810 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.crt.f8b19486 ...
	I0923 13:56:25.237814 2443810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.crt.f8b19486: {Name:mk797b0336eff7aa7b9d736c6c39053bc14ddd70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:56:25.238020 2443810 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.key.f8b19486 ...
	I0923 13:56:25.238035 2443810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.key.f8b19486: {Name:mk47bf3721fc8201140c42a5b1f3ee98239c074b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:56:25.238131 2443810 certs.go:381] copying /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.crt.f8b19486 -> /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.crt
	I0923 13:56:25.238290 2443810 certs.go:385] copying /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.key.f8b19486 -> /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.key
	I0923 13:56:25.238458 2443810 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/proxy-client.key
	I0923 13:56:25.238478 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:56:25.238494 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:56:25.238512 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:56:25.238527 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:56:25.238543 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 13:56:25.238560 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 13:56:25.238576 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 13:56:25.238592 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 13:56:25.238645 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070.pem (1338 bytes)
	W0923 13:56:25.238679 2443810 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070_empty.pem, impossibly tiny 0 bytes
	I0923 13:56:25.238691 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 13:56:25.238718 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem (1078 bytes)
	I0923 13:56:25.238751 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:56:25.238777 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem (1679 bytes)
	I0923 13:56:25.238828 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem (1708 bytes)
	I0923 13:56:25.238863 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem -> /usr/share/ca-certificates/23830702.pem
	I0923 13:56:25.238885 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:56:25.238901 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070.pem -> /usr/share/ca-certificates/2383070.pem
	I0923 13:56:25.239599 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:56:25.265476 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:56:25.290596 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:56:25.314958 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:56:25.339829 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 13:56:25.364130 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 13:56:25.388259 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:56:25.412156 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:56:25.436198 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem --> /usr/share/ca-certificates/23830702.pem (1708 bytes)
	I0923 13:56:25.460456 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:56:25.484874 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070.pem --> /usr/share/ca-certificates/2383070.pem (1338 bytes)
	I0923 13:56:25.509501 2443810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:56:25.527132 2443810 ssh_runner.go:195] Run: openssl version
	I0923 13:56:25.532472 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:56:25.542073 2443810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:56:25.545617 2443810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:25 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:56:25.545743 2443810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:56:25.552687 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:56:25.562080 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2383070.pem && ln -fs /usr/share/ca-certificates/2383070.pem /etc/ssl/certs/2383070.pem"
	I0923 13:56:25.571514 2443810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2383070.pem
	I0923 13:56:25.575118 2443810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 13:44 /usr/share/ca-certificates/2383070.pem
	I0923 13:56:25.575187 2443810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2383070.pem
	I0923 13:56:25.582164 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2383070.pem /etc/ssl/certs/51391683.0"
	I0923 13:56:25.591335 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23830702.pem && ln -fs /usr/share/ca-certificates/23830702.pem /etc/ssl/certs/23830702.pem"
	I0923 13:56:25.600513 2443810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23830702.pem
	I0923 13:56:25.604021 2443810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 13:44 /usr/share/ca-certificates/23830702.pem
	I0923 13:56:25.604132 2443810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23830702.pem
	I0923 13:56:25.611161 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23830702.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:56:25.620075 2443810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:56:25.623695 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 13:56:25.630419 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 13:56:25.637692 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 13:56:25.644584 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 13:56:25.651333 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 13:56:25.657879 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
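
The six openssl x509 -checkend 86400 runs above verify that none of the control-plane certificates expire within 24 hours (86400 seconds); a non-zero exit would trigger regeneration. An in-process Go equivalent of one such check, parsing the PEM and comparing NotAfter (a sketch; minikube itself shells out to openssl as logged):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // checkend reports whether the cert at path expires within d,
    // mirroring `openssl x509 -checkend` semantics.
    func checkend(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
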
	I0923 13:56:25.665134 2443810 kubeadm.go:392] StartCluster: {Name:ha-952506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-952506 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:56:25.665266 2443810 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 13:56:25.665330 2443810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 13:56:25.702446 2443810 cri.go:89] found id: ""
	I0923 13:56:25.702595 2443810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:56:25.711726 2443810 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 13:56:25.711745 2443810 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 13:56:25.711795 2443810 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 13:56:25.720556 2443810 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 13:56:25.720998 2443810 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-952506" does not appear in /home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:56:25.721104 2443810 kubeconfig.go:62] /home/jenkins/minikube-integration/19690-2377681/kubeconfig needs updating (will repair): [kubeconfig missing "ha-952506" cluster setting kubeconfig missing "ha-952506" context setting]
	I0923 13:56:25.721430 2443810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/kubeconfig: {Name:mk1c3c49c69db07ab1c6462bef79c6f07c9c4b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:56:25.721821 2443810 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:56:25.722068 2443810 kapi.go:59] client config for ha-952506: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/client.key", CAFile:"/home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a16ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:56:25.722781 2443810 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 13:56:25.722976 2443810 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 13:56:25.731585 2443810 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0923 13:56:25.731668 2443810 kubeadm.go:597] duration metric: took 19.915677ms to restartPrimaryControlPlane
	I0923 13:56:25.731684 2443810 kubeadm.go:394] duration metric: took 66.557641ms to StartCluster
	I0923 13:56:25.731701 2443810 settings.go:142] acquiring lock: {Name:mkec0ac22c7afe2712cd8676389ce937f473d18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:56:25.731772 2443810 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:56:25.732905 2443810 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-2377681/kubeconfig: {Name:mk1c3c49c69db07ab1c6462bef79c6f07c9c4b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:56:25.733131 2443810 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 13:56:25.733159 2443810 start.go:241] waiting for startup goroutines ...
	I0923 13:56:25.733175 2443810 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 13:56:25.733618 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:56:25.740668 2443810 out.go:177] * Enabled addons: 
	I0923 13:56:25.748149 2443810 addons.go:510] duration metric: took 14.964894ms for enable addons: enabled=[]
	I0923 13:56:25.748213 2443810 start.go:246] waiting for cluster config update ...
	I0923 13:56:25.748229 2443810 start.go:255] writing updated cluster config ...
	I0923 13:56:25.754884 2443810 out.go:201] 
	I0923 13:56:25.761003 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:56:25.761145 2443810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/config.json ...
	I0923 13:56:25.767135 2443810 out.go:177] * Starting "ha-952506-m02" control-plane node in "ha-952506" cluster
	I0923 13:56:25.771520 2443810 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 13:56:25.776394 2443810 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:56:25.781271 2443810 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:56:25.781303 2443810 cache.go:56] Caching tarball of preloaded images
	I0923 13:56:25.781371 2443810 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:56:25.781416 2443810 preload.go:172] Found /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0923 13:56:25.781431 2443810 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 13:56:25.781552 2443810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/config.json ...
	I0923 13:56:25.799721 2443810 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 13:56:25.799745 2443810 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 13:56:25.799765 2443810 cache.go:194] Successfully downloaded all kic artifacts
	I0923 13:56:25.799792 2443810 start.go:360] acquireMachinesLock for ha-952506-m02: {Name:mk04a6540fc26ba00f28e59043b4a1101789d717 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:56:25.799864 2443810 start.go:364] duration metric: took 47.983µs to acquireMachinesLock for "ha-952506-m02"
	I0923 13:56:25.799889 2443810 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:56:25.799898 2443810 fix.go:54] fixHost starting: m02
	I0923 13:56:25.800172 2443810 cli_runner.go:164] Run: docker container inspect ha-952506-m02 --format={{.State.Status}}
	I0923 13:56:25.815501 2443810 fix.go:112] recreateIfNeeded on ha-952506-m02: state=Stopped err=<nil>
	W0923 13:56:25.815534 2443810 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:56:25.818694 2443810 out.go:177] * Restarting existing docker container for "ha-952506-m02" ...
	I0923 13:56:25.820589 2443810 cli_runner.go:164] Run: docker start ha-952506-m02
	I0923 13:56:26.092972 2443810 cli_runner.go:164] Run: docker container inspect ha-952506-m02 --format={{.State.Status}}
	I0923 13:56:26.110935 2443810 kic.go:430] container "ha-952506-m02" state is running.
	I0923 13:56:26.111505 2443810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506-m02
	I0923 13:56:26.136406 2443810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/config.json ...
	I0923 13:56:26.136655 2443810 machine.go:93] provisionDockerMachine start ...
	I0923 13:56:26.136722 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m02
	I0923 13:56:26.161927 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:56:26.162177 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35799 <nil> <nil>}
	I0923 13:56:26.162193 2443810 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:56:26.164151 2443810 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0923 13:56:29.353127 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-952506-m02
	
	I0923 13:56:29.353193 2443810 ubuntu.go:169] provisioning hostname "ha-952506-m02"
	I0923 13:56:29.353289 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m02
	I0923 13:56:29.383992 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:56:29.384260 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35799 <nil> <nil>}
	I0923 13:56:29.384272 2443810 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-952506-m02 && echo "ha-952506-m02" | sudo tee /etc/hostname
	I0923 13:56:29.608801 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-952506-m02
	
	I0923 13:56:29.608934 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m02
	I0923 13:56:29.649261 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:56:29.649491 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35799 <nil> <nil>}
	I0923 13:56:29.649507 2443810 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-952506-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-952506-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-952506-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:56:29.835807 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:56:29.835897 2443810 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-2377681/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-2377681/.minikube}
	I0923 13:56:29.835937 2443810 ubuntu.go:177] setting up certificates
	I0923 13:56:29.835962 2443810 provision.go:84] configureAuth start
	I0923 13:56:29.836044 2443810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506-m02
	I0923 13:56:29.872029 2443810 provision.go:143] copyHostCerts
	I0923 13:56:29.872073 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem
	I0923 13:56:29.872105 2443810 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem, removing ...
	I0923 13:56:29.872112 2443810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem
	I0923 13:56:29.872188 2443810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem (1078 bytes)
	I0923 13:56:29.872273 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem
	I0923 13:56:29.872294 2443810 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem, removing ...
	I0923 13:56:29.872299 2443810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem
	I0923 13:56:29.872325 2443810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem (1123 bytes)
	I0923 13:56:29.872370 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem
	I0923 13:56:29.872386 2443810 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem, removing ...
	I0923 13:56:29.872390 2443810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem
	I0923 13:56:29.872415 2443810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem (1679 bytes)
	I0923 13:56:29.872467 2443810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem org=jenkins.ha-952506-m02 san=[127.0.0.1 192.168.49.3 ha-952506-m02 localhost minikube]
	I0923 13:56:30.150100 2443810 provision.go:177] copyRemoteCerts
	I0923 13:56:30.150300 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:56:30.150401 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m02
	I0923 13:56:30.175945 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35799 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m02/id_rsa Username:docker}
	I0923 13:56:30.290865 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 13:56:30.290931 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:56:30.319177 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 13:56:30.319261 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 13:56:30.346739 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 13:56:30.346807 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 13:56:30.378588 2443810 provision.go:87] duration metric: took 542.599273ms to configureAuth
	I0923 13:56:30.378620 2443810 ubuntu.go:193] setting minikube options for container-runtime
	I0923 13:56:30.378930 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:56:30.379044 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m02
	I0923 13:56:30.403830 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:56:30.404082 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35799 <nil> <nil>}
	I0923 13:56:30.404097 2443810 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:56:30.854579 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:56:30.854601 2443810 machine.go:96] duration metric: took 4.717936209s to provisionDockerMachine
	I0923 13:56:30.854612 2443810 start.go:293] postStartSetup for "ha-952506-m02" (driver="docker")
	I0923 13:56:30.854624 2443810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:56:30.854692 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:56:30.854733 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m02
	I0923 13:56:30.871201 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35799 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m02/id_rsa Username:docker}
	I0923 13:56:30.967532 2443810 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:56:30.970753 2443810 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 13:56:30.970795 2443810 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 13:56:30.970806 2443810 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 13:56:30.970813 2443810 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 13:56:30.970829 2443810 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/addons for local assets ...
	I0923 13:56:30.970888 2443810 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/files for local assets ...
	I0923 13:56:30.970974 2443810 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem -> 23830702.pem in /etc/ssl/certs
	I0923 13:56:30.970986 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem -> /etc/ssl/certs/23830702.pem
	I0923 13:56:30.971096 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:56:30.979825 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem --> /etc/ssl/certs/23830702.pem (1708 bytes)
	I0923 13:56:31.007539 2443810 start.go:296] duration metric: took 152.911454ms for postStartSetup
	I0923 13:56:31.007700 2443810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:56:31.007786 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m02
	I0923 13:56:31.039849 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35799 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m02/id_rsa Username:docker}
	I0923 13:56:31.165720 2443810 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 13:56:31.188599 2443810 fix.go:56] duration metric: took 5.388693829s for fixHost
	I0923 13:56:31.188625 2443810 start.go:83] releasing machines lock for "ha-952506-m02", held for 5.38875004s
	I0923 13:56:31.188693 2443810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506-m02
	I0923 13:56:31.217104 2443810 out.go:177] * Found network options:
	I0923 13:56:31.220305 2443810 out.go:177]   - NO_PROXY=192.168.49.2
	W0923 13:56:31.224532 2443810 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:56:31.224576 2443810 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:56:31.224655 2443810 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:56:31.224704 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m02
	I0923 13:56:31.224945 2443810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:56:31.224990 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m02
	I0923 13:56:31.273904 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35799 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m02/id_rsa Username:docker}
	I0923 13:56:31.277245 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35799 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m02/id_rsa Username:docker}
	I0923 13:56:31.827744 2443810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:56:31.840206 2443810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:56:31.870168 2443810 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0923 13:56:31.870390 2443810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:56:31.902110 2443810 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 13:56:31.902149 2443810 start.go:495] detecting cgroup driver to use...
	I0923 13:56:31.902184 2443810 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 13:56:31.902247 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:56:31.936760 2443810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:56:31.955232 2443810 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:56:31.955353 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:56:31.980291 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:56:32.002528 2443810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:56:32.324645 2443810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:56:32.604765 2443810 docker.go:233] disabling docker service ...
	I0923 13:56:32.604867 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:56:32.664743 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:56:32.700522 2443810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:56:33.036158 2443810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:56:33.292977 2443810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 13:56:33.354937 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:56:33.412794 2443810 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 13:56:33.412872 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:33.445150 2443810 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:56:33.445233 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:33.489100 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:33.546948 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:33.594967 2443810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:56:33.656802 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:33.717174 2443810 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:33.765422 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:56:33.825856 2443810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:56:33.845406 2443810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:56:33.864697 2443810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:56:34.122485 2443810 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 13:56:34.579753 2443810 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:56:34.579853 2443810 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:56:34.589815 2443810 start.go:563] Will wait 60s for crictl version
	I0923 13:56:34.589885 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:56:34.593553 2443810 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:56:34.684440 2443810 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0923 13:56:34.684523 2443810 ssh_runner.go:195] Run: crio --version
	I0923 13:56:34.749353 2443810 ssh_runner.go:195] Run: crio --version
	I0923 13:56:34.810923 2443810 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0923 13:56:34.812619 2443810 out.go:177]   - env NO_PROXY=192.168.49.2
	I0923 13:56:34.814024 2443810 cli_runner.go:164] Run: docker network inspect ha-952506 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:56:34.834594 2443810 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 13:56:34.838287 2443810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:56:34.857785 2443810 mustload.go:65] Loading cluster: ha-952506
	I0923 13:56:34.858024 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:56:34.858305 2443810 cli_runner.go:164] Run: docker container inspect ha-952506 --format={{.State.Status}}
	I0923 13:56:34.891755 2443810 host.go:66] Checking if "ha-952506" exists ...
	I0923 13:56:34.892038 2443810 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506 for IP: 192.168.49.3
	I0923 13:56:34.892046 2443810 certs.go:194] generating shared ca certs ...
	I0923 13:56:34.892060 2443810 certs.go:226] acquiring lock for ca certs: {Name:mka74fca5f9586bfec26165232a0abe6b9527b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:56:34.892175 2443810 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key
	I0923 13:56:34.892223 2443810 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key
	I0923 13:56:34.892229 2443810 certs.go:256] generating profile certs ...
	I0923 13:56:34.892302 2443810 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/client.key
	I0923 13:56:34.892363 2443810 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.key.590c642b
	I0923 13:56:34.892405 2443810 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/proxy-client.key
	I0923 13:56:34.892414 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:56:34.892426 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:56:34.892437 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:56:34.892447 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:56:34.892461 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 13:56:34.892472 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 13:56:34.892483 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 13:56:34.892493 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 13:56:34.892543 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070.pem (1338 bytes)
	W0923 13:56:34.892570 2443810 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070_empty.pem, impossibly tiny 0 bytes
	I0923 13:56:34.892578 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 13:56:34.892612 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem (1078 bytes)
	I0923 13:56:34.892634 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:56:34.892656 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem (1679 bytes)
	I0923 13:56:34.892701 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem (1708 bytes)
	I0923 13:56:34.892736 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem -> /usr/share/ca-certificates/23830702.pem
	I0923 13:56:34.892752 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:56:34.892762 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070.pem -> /usr/share/ca-certificates/2383070.pem
	I0923 13:56:34.892818 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:56:34.917924 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35794 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506/id_rsa Username:docker}
	I0923 13:56:35.030595 2443810 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 13:56:35.041191 2443810 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 13:56:35.077032 2443810 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 13:56:35.100235 2443810 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 13:56:35.136993 2443810 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 13:56:35.153106 2443810 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 13:56:35.205341 2443810 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 13:56:35.229609 2443810 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0923 13:56:35.253046 2443810 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 13:56:35.260268 2443810 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 13:56:35.288617 2443810 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 13:56:35.301371 2443810 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0923 13:56:35.321166 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:56:35.347402 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:56:35.382403 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:56:35.417060 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:56:35.449070 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 13:56:35.476322 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 13:56:35.504356 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:56:35.531734 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:56:35.565639 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem --> /usr/share/ca-certificates/23830702.pem (1708 bytes)
	I0923 13:56:35.601396 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:56:35.641712 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070.pem --> /usr/share/ca-certificates/2383070.pem (1338 bytes)
	I0923 13:56:35.684019 2443810 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 13:56:35.706980 2443810 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 13:56:35.728033 2443810 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 13:56:35.754047 2443810 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0923 13:56:35.788691 2443810 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 13:56:35.816478 2443810 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0923 13:56:35.850631 2443810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 13:56:35.884689 2443810 ssh_runner.go:195] Run: openssl version
	I0923 13:56:35.896198 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23830702.pem && ln -fs /usr/share/ca-certificates/23830702.pem /etc/ssl/certs/23830702.pem"
	I0923 13:56:35.911986 2443810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23830702.pem
	I0923 13:56:35.920713 2443810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 13:44 /usr/share/ca-certificates/23830702.pem
	I0923 13:56:35.920842 2443810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23830702.pem
	I0923 13:56:35.931181 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23830702.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:56:35.940395 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:56:35.956199 2443810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:56:35.960647 2443810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:25 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:56:35.960714 2443810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:56:35.970033 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:56:35.987101 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2383070.pem && ln -fs /usr/share/ca-certificates/2383070.pem /etc/ssl/certs/2383070.pem"
	I0923 13:56:35.999792 2443810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2383070.pem
	I0923 13:56:36.005099 2443810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 13:44 /usr/share/ca-certificates/2383070.pem
	I0923 13:56:36.005251 2443810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2383070.pem
	I0923 13:56:36.022900 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2383070.pem /etc/ssl/certs/51391683.0"
	I0923 13:56:36.033492 2443810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:56:36.037836 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 13:56:36.049700 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 13:56:36.064017 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 13:56:36.072663 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 13:56:36.088259 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 13:56:36.098073 2443810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 13:56:36.107289 2443810 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I0923 13:56:36.107462 2443810 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-952506-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-952506 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:56:36.107510 2443810 kube-vip.go:115] generating kube-vip config ...
	I0923 13:56:36.107608 2443810 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0923 13:56:36.152772 2443810 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 13:56:36.152891 2443810 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0923 13:56:36.152982 2443810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:56:36.166019 2443810 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:56:36.166095 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 13:56:36.179896 2443810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0923 13:56:36.212845 2443810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:56:36.241802 2443810 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 13:56:36.281954 2443810 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0923 13:56:36.285475 2443810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:56:36.301048 2443810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:56:36.499530 2443810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:56:36.521235 2443810 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 13:56:36.521777 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:56:36.524156 2443810 out.go:177] * Verifying Kubernetes components...
	I0923 13:56:36.526154 2443810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:56:36.714944 2443810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:56:36.733695 2443810 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:56:36.733958 2443810 kapi.go:59] client config for ha-952506: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/client.key", CAFile:"/home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a16ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 13:56:36.734016 2443810 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0923 13:56:36.734232 2443810 node_ready.go:35] waiting up to 6m0s for node "ha-952506-m02" to be "Ready" ...
	I0923 13:56:36.734324 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:56:36.734331 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:36.734340 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:36.734343 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:48.353624 2443810 round_trippers.go:574] Response Status: 500 Internal Server Error in 11619 milliseconds
	I0923 13:56:48.354033 2443810 node_ready.go:53] error getting node "ha-952506-m02": etcdserver: request timed out
	I0923 13:56:48.354098 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:56:48.354108 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:48.354123 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:48.354132 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:55.212550 2443810 round_trippers.go:574] Response Status: 200 OK in 6858 milliseconds
	I0923 13:56:55.214537 2443810 node_ready.go:49] node "ha-952506-m02" has status "Ready":"True"
	I0923 13:56:55.214563 2443810 node_ready.go:38] duration metric: took 18.480318701s for node "ha-952506-m02" to be "Ready" ...
	I0923 13:56:55.214574 2443810 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:56:55.214618 2443810 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 13:56:55.214628 2443810 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 13:56:55.214694 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0923 13:56:55.214699 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:55.214707 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:55.214711 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:55.277144 2443810 round_trippers.go:574] Response Status: 429 Too Many Requests in 62 milliseconds
	I0923 13:56:56.277580 2443810 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0923 13:56:56.277628 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0923 13:56:56.277634 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.277646 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.277650 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.334823 2443810 round_trippers.go:574] Response Status: 200 OK in 57 milliseconds
	I0923 13:56:56.346738 2443810 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.349124 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:56:56.349146 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.349156 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.349162 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.366931 2443810 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0923 13:56:56.367680 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:56:56.367700 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.367710 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.367714 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.391167 2443810 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0923 13:56:56.391802 2443810 pod_ready.go:93] pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace has status "Ready":"True"
	I0923 13:56:56.391827 2443810 pod_ready.go:82] duration metric: took 45.05276ms for pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.391840 2443810 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zwchv" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.391914 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zwchv
	I0923 13:56:56.391925 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.391933 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.391937 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.403757 2443810 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 13:56:56.404534 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:56:56.404560 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.404569 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.404577 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.423210 2443810 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0923 13:56:56.423818 2443810 pod_ready.go:93] pod "coredns-7c65d6cfc9-zwchv" in "kube-system" namespace has status "Ready":"True"
	I0923 13:56:56.423847 2443810 pod_ready.go:82] duration metric: took 31.99918ms for pod "coredns-7c65d6cfc9-zwchv" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.423865 2443810 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.423944 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-952506
	I0923 13:56:56.423955 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.423964 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.423970 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.436396 2443810 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0923 13:56:56.437421 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:56:56.437449 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.437458 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.437462 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.444214 2443810 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:56:56.444765 2443810 pod_ready.go:93] pod "etcd-ha-952506" in "kube-system" namespace has status "Ready":"True"
	I0923 13:56:56.444791 2443810 pod_ready.go:82] duration metric: took 20.918196ms for pod "etcd-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.444805 2443810 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.444887 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-952506-m02
	I0923 13:56:56.444899 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.444909 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.444915 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.456057 2443810 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 13:56:56.456751 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:56:56.456774 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.456785 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.456794 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.468750 2443810 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 13:56:56.469459 2443810 pod_ready.go:93] pod "etcd-ha-952506-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 13:56:56.469496 2443810 pod_ready.go:82] duration metric: took 24.663297ms for pod "etcd-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.469514 2443810 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-952506-m03" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.469592 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-952506-m03
	I0923 13:56:56.469603 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.469611 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.469615 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.475686 2443810 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:56:56.478577 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m03
	I0923 13:56:56.478602 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.478611 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.478616 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.493821 2443810 round_trippers.go:574] Response Status: 404 Not Found in 15 milliseconds
	I0923 13:56:56.493990 2443810 pod_ready.go:98] node "ha-952506-m03" hosting pod "etcd-ha-952506-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-952506-m03": nodes "ha-952506-m03" not found
	I0923 13:56:56.494018 2443810 pod_ready.go:82] duration metric: took 24.495351ms for pod "etcd-ha-952506-m03" in "kube-system" namespace to be "Ready" ...
	E0923 13:56:56.494034 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506-m03" hosting pod "etcd-ha-952506-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-952506-m03": nodes "ha-952506-m03" not found
	I0923 13:56:56.494054 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.678240 2443810 request.go:632] Waited for 184.095893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-952506
	I0923 13:56:56.678357 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-952506
	I0923 13:56:56.678370 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.678380 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.678388 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.686517 2443810 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 13:56:56.878196 2443810 request.go:632] Waited for 190.271566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:56:56.878272 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:56:56.878284 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:56.878339 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:56.878349 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:56.887669 2443810 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 13:56:56.888698 2443810 pod_ready.go:93] pod "kube-apiserver-ha-952506" in "kube-system" namespace has status "Ready":"True"
	I0923 13:56:56.888732 2443810 pod_ready.go:82] duration metric: took 394.663673ms for pod "kube-apiserver-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:56.888750 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:57.077749 2443810 request.go:632] Waited for 188.928828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-952506-m02
	I0923 13:56:57.077825 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-952506-m02
	I0923 13:56:57.077836 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:57.077844 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:57.077860 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:57.089823 2443810 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 13:56:57.277945 2443810 request.go:632] Waited for 187.302922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:56:57.278025 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:56:57.278039 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:57.278080 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:57.278089 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:57.281198 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:56:57.283078 2443810 pod_ready.go:93] pod "kube-apiserver-ha-952506-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 13:56:57.283104 2443810 pod_ready.go:82] duration metric: took 394.344985ms for pod "kube-apiserver-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:57.283116 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-952506-m03" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:57.477941 2443810 request.go:632] Waited for 194.74414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-952506-m03
	I0923 13:56:57.478018 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-952506-m03
	I0923 13:56:57.478031 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:57.478066 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:57.478076 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:57.480873 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:56:57.678607 2443810 request.go:632] Waited for 196.885234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m03
	I0923 13:56:57.678668 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m03
	I0923 13:56:57.678674 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:57.678683 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:57.678693 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:57.681848 2443810 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0923 13:56:57.682032 2443810 pod_ready.go:98] node "ha-952506-m03" hosting pod "kube-apiserver-ha-952506-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-952506-m03": nodes "ha-952506-m03" not found
	I0923 13:56:57.682061 2443810 pod_ready.go:82] duration metric: took 398.93704ms for pod "kube-apiserver-ha-952506-m03" in "kube-system" namespace to be "Ready" ...
	E0923 13:56:57.682077 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506-m03" hosting pod "kube-apiserver-ha-952506-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-952506-m03": nodes "ha-952506-m03" not found
	I0923 13:56:57.682086 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:57.878267 2443810 request.go:632] Waited for 196.095649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-952506
	I0923 13:56:57.878344 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-952506
	I0923 13:56:57.878351 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:57.878360 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:57.878399 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:57.887857 2443810 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 13:56:58.077943 2443810 request.go:632] Waited for 189.260029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:56:58.078016 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:56:58.078027 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:58.078036 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:58.078062 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:58.084508 2443810 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:56:58.085546 2443810 pod_ready.go:93] pod "kube-controller-manager-ha-952506" in "kube-system" namespace has status "Ready":"True"
	I0923 13:56:58.085570 2443810 pod_ready.go:82] duration metric: took 403.470437ms for pod "kube-controller-manager-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:58.085596 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:58.278397 2443810 request.go:632] Waited for 192.713518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-952506-m02
	I0923 13:56:58.278528 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-952506-m02
	I0923 13:56:58.278542 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:58.278553 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:58.278558 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:58.281826 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:56:58.477989 2443810 request.go:632] Waited for 195.335166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:56:58.478095 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:56:58.478128 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:58.478145 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:58.478149 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:58.481235 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:56:58.482035 2443810 pod_ready.go:93] pod "kube-controller-manager-ha-952506-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 13:56:58.482057 2443810 pod_ready.go:82] duration metric: took 396.447237ms for pod "kube-controller-manager-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:58.482070 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-952506-m03" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:58.678055 2443810 request.go:632] Waited for 195.918529ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-952506-m03
	I0923 13:56:58.678134 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-952506-m03
	I0923 13:56:58.678148 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:58.678157 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:58.678162 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:58.681281 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:56:58.878265 2443810 request.go:632] Waited for 196.133744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m03
	I0923 13:56:58.878397 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m03
	I0923 13:56:58.878472 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:58.878502 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:58.878534 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:58.882127 2443810 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0923 13:56:58.882361 2443810 pod_ready.go:98] node "ha-952506-m03" hosting pod "kube-controller-manager-ha-952506-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-952506-m03": nodes "ha-952506-m03" not found
	I0923 13:56:58.882399 2443810 pod_ready.go:82] duration metric: took 400.320367ms for pod "kube-controller-manager-ha-952506-m03" in "kube-system" namespace to be "Ready" ...
	E0923 13:56:58.882443 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506-m03" hosting pod "kube-controller-manager-ha-952506-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-952506-m03": nodes "ha-952506-m03" not found
	I0923 13:56:58.882464 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-497f8" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:59.078553 2443810 request.go:632] Waited for 195.987187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-497f8
	I0923 13:56:59.078711 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-497f8
	I0923 13:56:59.078747 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:59.078774 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:59.078795 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:59.081811 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:56:59.277715 2443810 request.go:632] Waited for 195.146953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m03
	I0923 13:56:59.277826 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m03
	I0923 13:56:59.277864 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:59.277891 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:59.277910 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:59.280884 2443810 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0923 13:56:59.281231 2443810 pod_ready.go:98] node "ha-952506-m03" hosting pod "kube-proxy-497f8" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-952506-m03": nodes "ha-952506-m03" not found
	I0923 13:56:59.281288 2443810 pod_ready.go:82] duration metric: took 398.782385ms for pod "kube-proxy-497f8" in "kube-system" namespace to be "Ready" ...
	E0923 13:56:59.281313 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506-m03" hosting pod "kube-proxy-497f8" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-952506-m03": nodes "ha-952506-m03" not found
	I0923 13:56:59.281334 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9w2p2" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:59.477599 2443810 request.go:632] Waited for 196.137337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w2p2
	I0923 13:56:59.477662 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w2p2
	I0923 13:56:59.477673 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:59.477682 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:59.477686 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:59.480878 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:56:59.677748 2443810 request.go:632] Waited for 196.249803ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:56:59.677812 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:56:59.677823 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:59.677832 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:59.677844 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:59.680882 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:56:59.681492 2443810 pod_ready.go:93] pod "kube-proxy-9w2p2" in "kube-system" namespace has status "Ready":"True"
	I0923 13:56:59.681513 2443810 pod_ready.go:82] duration metric: took 400.136864ms for pod "kube-proxy-9w2p2" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:59.681524 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qqlbp" in "kube-system" namespace to be "Ready" ...
	I0923 13:56:59.878462 2443810 request.go:632] Waited for 196.871721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqlbp
	I0923 13:56:59.878530 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqlbp
	I0923 13:56:59.878542 2443810 round_trippers.go:469] Request Headers:
	I0923 13:56:59.878551 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:56:59.878560 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:56:59.881571 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:57:00.078657 2443810 request.go:632] Waited for 196.411539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:57:00.078760 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:57:00.078776 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:00.078786 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:00.078791 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:00.083345 2443810 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:57:00.086106 2443810 pod_ready.go:93] pod "kube-proxy-qqlbp" in "kube-system" namespace has status "Ready":"True"
	I0923 13:57:00.086178 2443810 pod_ready.go:82] duration metric: took 404.644924ms for pod "kube-proxy-qqlbp" in "kube-system" namespace to be "Ready" ...
	I0923 13:57:00.086198 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s598q" in "kube-system" namespace to be "Ready" ...
	I0923 13:57:00.278075 2443810 request.go:632] Waited for 191.748806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s598q
	I0923 13:57:00.278143 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s598q
	I0923 13:57:00.278149 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:00.278165 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:00.278181 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:00.281456 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:57:00.478634 2443810 request.go:632] Waited for 196.344389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:57:00.478692 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:57:00.478701 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:00.478711 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:00.478717 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:00.481912 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:57:00.482966 2443810 pod_ready.go:93] pod "kube-proxy-s598q" in "kube-system" namespace has status "Ready":"True"
	I0923 13:57:00.482993 2443810 pod_ready.go:82] duration metric: took 396.785058ms for pod "kube-proxy-s598q" in "kube-system" namespace to be "Ready" ...
	I0923 13:57:00.483006 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:57:00.678516 2443810 request.go:632] Waited for 195.422049ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-952506
	I0923 13:57:00.678577 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-952506
	I0923 13:57:00.678583 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:00.678591 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:00.678596 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:00.681798 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:57:00.877646 2443810 request.go:632] Waited for 195.204051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:57:00.877737 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:57:00.877776 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:00.877792 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:00.877797 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:00.880995 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:57:00.881604 2443810 pod_ready.go:93] pod "kube-scheduler-ha-952506" in "kube-system" namespace has status "Ready":"True"
	I0923 13:57:00.881629 2443810 pod_ready.go:82] duration metric: took 398.613815ms for pod "kube-scheduler-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:57:00.881642 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:57:01.078616 2443810 request.go:632] Waited for 196.881238ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-952506-m02
	I0923 13:57:01.078678 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-952506-m02
	I0923 13:57:01.078685 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:01.078699 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:01.078707 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:01.081878 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:57:01.278126 2443810 request.go:632] Waited for 195.323031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:57:01.278186 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:57:01.278195 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:01.278205 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:01.278212 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:01.281191 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:57:01.281874 2443810 pod_ready.go:93] pod "kube-scheduler-ha-952506-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 13:57:01.281897 2443810 pod_ready.go:82] duration metric: took 400.246514ms for pod "kube-scheduler-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:57:01.281911 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-952506-m03" in "kube-system" namespace to be "Ready" ...
	I0923 13:57:01.478446 2443810 request.go:632] Waited for 196.451168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-952506-m03
	I0923 13:57:01.478509 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-952506-m03
	I0923 13:57:01.478515 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:01.478523 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:01.478529 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:01.481575 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:57:01.678550 2443810 request.go:632] Waited for 196.341034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m03
	I0923 13:57:01.678678 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m03
	I0923 13:57:01.678692 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:01.678702 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:01.678707 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:01.681863 2443810 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0923 13:57:01.681993 2443810 pod_ready.go:98] node "ha-952506-m03" hosting pod "kube-scheduler-ha-952506-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-952506-m03": nodes "ha-952506-m03" not found
	I0923 13:57:01.682014 2443810 pod_ready.go:82] duration metric: took 400.096011ms for pod "kube-scheduler-ha-952506-m03" in "kube-system" namespace to be "Ready" ...
	E0923 13:57:01.682026 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506-m03" hosting pod "kube-scheduler-ha-952506-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-952506-m03": nodes "ha-952506-m03" not found
	I0923 13:57:01.682036 2443810 pod_ready.go:39] duration metric: took 6.467450101s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
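(The lines above record the post-restart readiness sweep: for each system-critical pod, the tool GETs the pod, then GETs its hosting node, and skips pods whose node no longer exists — here the removed "ha-952506-m03". Below is a minimal sketch of that wait pattern, assuming an already-configured client-go clientset; the package and function names are illustrative, not minikube's actual helpers.)

	package waitutil

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a kube-system pod until it reports Ready, giving up
	// early when the node hosting the pod has been deleted (the "-m03" case
	// in the log above, where GET /nodes/... returns 404).
	func waitPodReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := c.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			// If the hosting node is gone, waiting for Ready is pointless.
			if _, err := c.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); apierrors.IsNotFound(err) {
				return fmt.Errorf("node %q not found; skipping pod %q", pod.Spec.NodeName, name)
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(400 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for pod %q to be Ready", name)
	}

(The ~190ms "Waited for ... due to client-side throttling" lines come from client-go's own rate limiter pacing these paired pod/node GETs, not from server-side priority and fairness.)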
	I0923 13:57:01.682051 2443810 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:57:01.682117 2443810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:57:01.693126 2443810 api_server.go:72] duration metric: took 25.171792242s to wait for apiserver process to appear ...
	I0923 13:57:01.693154 2443810 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:57:01.693176 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:01.700876 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:01.700912 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
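(From here the tool polls /healthz roughly every 500ms; the apiserver answers 500 as long as the start-service-ip-repair-controllers post-start hook is still failing, and each response enumerates every hook with "[-]" marking the failing one. A minimal sketch of such a polling loop, assuming an *http.Client already trusting the cluster CA; the function name is illustrative:)

	// waitAPIServerHealthy polls the healthz endpoint until it returns 200 OK
	// or the deadline passes, logging each non-200 body (as seen above).
	func waitAPIServerHealthy(client *http.Client, url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url) // e.g. https://192.168.49.2:8443/healthz
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// A 500 body lists each post-start hook; "[-]" marks the
				// hook still failing, "[+]" the ones that completed.
				log.Printf("status: %s returned error %d:\n%s", url, resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s never became healthy within %v", url, timeout)
	}

(Assumes imports of fmt, io, log, net/http, and time.)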
	I0923 13:57:02.193450 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:02.201108 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:02.201144 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:02.693647 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:02.701318 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:02.701349 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:03.193940 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:03.201647 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:03.201677 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:03.694270 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:03.704037 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:03.704070 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:04.193652 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:04.201249 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:04.201279 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:04.693772 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:04.701414 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:04.701443 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:05.194062 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:05.201757 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:05.201787 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:05.693372 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:05.706779 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:05.706811 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	[log condensed: 17 further probes follow, from I0923 13:57:06.193562 "Checking apiserver healthz at https://192.168.49.2:8443/healthz ..." through W0923 13:57:14.203608, at roughly 500ms intervals. Every probe returned HTTP 500 and logged the identical verbose body twice (once at api_server.go:279 as Info, once at api_server.go:103 as Warning): all checks "ok" except "[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld", followed by "healthz check failed". Only the timestamps differ from the block shown above.]
	I0923 13:57:14.694230 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:14.710360 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:14.710384 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:15.193903 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:15.201524 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:15.201553 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:15.694153 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:15.702639 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:15.702672 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:16.194075 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:16.201946 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:16.201975 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:16.693313 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:16.701003 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:16.701044 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:17.193302 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:17.202403 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:17.202431 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:17.694261 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:17.701978 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:17.702007 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:18.193296 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:18.202622 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:18.202655 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:18.693257 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:18.701221 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:18.701253 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:19.193646 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:19.203043 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:19.203074 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:19.693553 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:19.701932 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:19.701962 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:20.193321 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:20.201220 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:20.201260 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:20.693782 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:20.701437 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:20.701467 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:21.194061 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:21.201849 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:21.201884 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:21.693301 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:21.700953 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:21.701000 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:22.193471 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:22.201227 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:22.201262 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:22.693845 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:22.709939 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:22.709972 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:23.193466 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:23.201236 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:23.201266 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:23.693825 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:23.702053 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:23.702084 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:24.193329 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:24.201505 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:24.201540 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:24.694195 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:24.701815 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:24.701844 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:25.193302 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:25.200980 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:25.201017 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:25.693623 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:25.701516 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:25.701543 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:26.193639 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:26.201645 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:26.201674 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:26.694255 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:26.701989 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:26.702031 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:27.193381 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:27.201569 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:27.201598 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:27.694131 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:27.701844 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:27.701872 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:28.193378 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:28.201143 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:28.201186 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:28.693353 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:28.703338 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:28.703369 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:29.194101 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:29.202023 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:29.202060 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:57:29.693540 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:29.701253 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:29.701282 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[identical healthz output elided]
	I0923 13:57:30.193898 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:30.693277 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:31.193550 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:31.694032 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:32.193271 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:32.693690 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:33.194001 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:33.693303 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:34.193305 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:34.693854 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:35.193311 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:35.693670 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:36.193243 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	[every probe above returned the same 500 response, with only poststarthook/start-service-ip-repair-controllers failing; identical output elided]
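The stretch of log above is minikube's readiness loop: roughly every 500 ms it probes the apiserver's /healthz endpoint and keeps retrying while it gets a 500 back. Each [+]/[-] line is one registered health check; for failing checks kube-apiserver deliberately prints "reason withheld" and records the real error only in its own log. Below is a minimal sketch of this kind of probe loop; it is not minikube's actual api_server.go code, and the skipped TLS verification is an assumption made purely to keep the sketch short:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz URL until it returns 200 OK
// or the timeout expires, mirroring the retry loop visible in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: certificate verification is skipped for the sketch;
		// a real client should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
			// 500 responses carry the per-check [+]/[-] report seen above.
			fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```

In this window the only check that never clears is poststarthook/start-service-ip-repair-controllers, which is why every probe fails identically.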
	I0923 13:57:36.693585 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:57:36.693678 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:57:36.732062 2443810 cri.go:89] found id: "735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c"
	I0923 13:57:36.732089 2443810 cri.go:89] found id: "b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71"
	I0923 13:57:36.732095 2443810 cri.go:89] found id: ""
	I0923 13:57:36.732102 2443810 logs.go:276] 2 containers: [735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71]
	I0923 13:57:36.732157 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:36.735854 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:36.739227 2443810 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:57:36.739301 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:57:36.785578 2443810 cri.go:89] found id: "9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93"
	I0923 13:57:36.785604 2443810 cri.go:89] found id: "d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45"
	I0923 13:57:36.785609 2443810 cri.go:89] found id: ""
	I0923 13:57:36.785615 2443810 logs.go:276] 2 containers: [9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93 d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45]
	I0923 13:57:36.785672 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:36.789129 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:36.792635 2443810 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:57:36.792701 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:57:36.829270 2443810 cri.go:89] found id: ""
	I0923 13:57:36.829352 2443810 logs.go:276] 0 containers: []
	W0923 13:57:36.829376 2443810 logs.go:278] No container was found matching "coredns"
	I0923 13:57:36.829402 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:57:36.829491 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:57:36.872995 2443810 cri.go:89] found id: "8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275"
	I0923 13:57:36.873016 2443810 cri.go:89] found id: "403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446"
	I0923 13:57:36.873022 2443810 cri.go:89] found id: ""
	I0923 13:57:36.873029 2443810 logs.go:276] 2 containers: [8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275 403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446]
	I0923 13:57:36.873084 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:36.876720 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:36.880300 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:57:36.880403 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:57:36.918426 2443810 cri.go:89] found id: "4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb"
	I0923 13:57:36.918451 2443810 cri.go:89] found id: ""
	I0923 13:57:36.918461 2443810 logs.go:276] 1 containers: [4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb]
	I0923 13:57:36.918544 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:36.922193 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:57:36.922286 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:57:36.960411 2443810 cri.go:89] found id: "0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5"
	I0923 13:57:36.960437 2443810 cri.go:89] found id: "959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582"
	I0923 13:57:36.960442 2443810 cri.go:89] found id: ""
	I0923 13:57:36.960450 2443810 logs.go:276] 2 containers: [0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5 959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582]
	I0923 13:57:36.960531 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:36.964393 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:36.968649 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:57:36.968726 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:57:37.019500 2443810 cri.go:89] found id: "664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc"
	I0923 13:57:37.019529 2443810 cri.go:89] found id: ""
	I0923 13:57:37.019555 2443810 logs.go:276] 1 containers: [664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc]
	I0923 13:57:37.019685 2443810 ssh_runner.go:195] Run: which crictl
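With the healthz deadline approaching, the harness pivots to diagnostics: for each control-plane component it runs sudo crictl ps -a --quiet --name=&lt;component&gt; over SSH and keeps the matching container IDs, including exited ones (which is why two kube-apiserver, etcd, kube-scheduler, and kube-controller-manager IDs show up, and why coredns legitimately matches zero). A sketch of that discovery step, assuming crictl is on PATH locally and sudo is available; the helper name listContainerIDs is mine, not minikube's:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all CRI containers (running or
// exited) whose name matches the given component, mirroring the
// "crictl ps -a --quiet --name=..." calls in the log above.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %s: %w", component, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
```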
	I0923 13:57:37.023758 2443810 logs.go:123] Gathering logs for kubelet ...
	I0923 13:57:37.023782 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 13:57:37.104601 2443810 logs.go:123] Gathering logs for kube-apiserver [735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c] ...
	I0923 13:57:37.104639 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c"
	I0923 13:57:37.160822 2443810 logs.go:123] Gathering logs for kube-controller-manager [0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5] ...
	I0923 13:57:37.160854 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5"
	I0923 13:57:37.220355 2443810 logs.go:123] Gathering logs for etcd [d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45] ...
	I0923 13:57:37.220389 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45"
	I0923 13:57:37.277768 2443810 logs.go:123] Gathering logs for container status ...
	I0923 13:57:37.277809 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:57:37.350417 2443810 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:57:37.350448 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:57:37.672902 2443810 logs.go:123] Gathering logs for kube-apiserver [b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71] ...
	I0923 13:57:37.672942 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71"
	I0923 13:57:37.712770 2443810 logs.go:123] Gathering logs for kube-controller-manager [959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582] ...
	I0923 13:57:37.712809 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582"
	I0923 13:57:37.755659 2443810 logs.go:123] Gathering logs for kindnet [664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc] ...
	I0923 13:57:37.755689 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc"
	I0923 13:57:37.794522 2443810 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:57:37.794552 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:57:37.866423 2443810 logs.go:123] Gathering logs for dmesg ...
	I0923 13:57:37.866464 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:57:37.883090 2443810 logs.go:123] Gathering logs for etcd [9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93] ...
	I0923 13:57:37.883119 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93"
	I0923 13:57:37.936814 2443810 logs.go:123] Gathering logs for kube-scheduler [8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275] ...
	I0923 13:57:37.936847 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275"
	I0923 13:57:38.002378 2443810 logs.go:123] Gathering logs for kube-scheduler [403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446] ...
	I0923 13:57:38.002468 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446"
	I0923 13:57:38.063999 2443810 logs.go:123] Gathering logs for kube-proxy [4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb] ...
	I0923 13:57:38.064099 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb"
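With the IDs collected, the log-gathering pass tails the last 400 lines from each container via crictl logs and pulls the kubelet and crio journals via journalctl, exactly as the Run: lines above show. The same pass condensed into a sketch, under the same local crictl/sudo assumptions as before; gatherLogs is a hypothetical helper:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs mirrors the "Gathering logs for ..." phase above: tail each
// container's CRI logs, then the relevant systemd units.
func gatherLogs(containerIDs []string) {
	for _, id := range containerIDs {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("crictl logs %s: %v\n", id, err)
		}
		fmt.Printf("=== container %s ===\n%s\n", id, out)
	}
	for _, unit := range []string{"kubelet", "crio"} {
		out, _ := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
		fmt.Printf("=== journal %s ===\n%s\n", unit, out)
	}
}

func main() {
	// In the real flow the IDs come from the discovery step above;
	// "<container-id>" is a stand-in, not a real ID from this run.
	gatherLogs([]string{"<container-id>"})
}
```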
	I0923 13:57:40.620300 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:41.440826 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:57:41.440856 2443810 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
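The polling pattern above — hit /healthz, log the verbose per-check breakdown on a 500, retry until the apiserver answers 200 — can be reproduced with a short standalone probe. The following is a minimal sketch, not minikube's actual implementation: the endpoint comes from the log, but the client setup, retry interval, and TLS handling are assumptions (the apiserver's serving cert is signed by the cluster CA, so a throwaway probe has to either load that CA or skip verification, as done here).

	// healthzprobe: a minimal sketch of polling a kube-apiserver /healthz
	// endpoint as the log above does. Illustrative only; URL from the log,
	// everything else assumed.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping verification only because this is a local sketch;
			// a real client would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err != nil {
				fmt.Println("healthz unreachable:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// On failure the apiserver returns the per-check breakdown
				// seen above; on success the body is just "ok".
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // healthy; stop polling
				}
			}
			time.Sleep(2 * time.Second) // back off between probes
		}
	}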
	I0923 13:57:41.440898 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:57:41.440967 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:57:41.519862 2443810 cri.go:89] found id: "735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c"
	I0923 13:57:41.519883 2443810 cri.go:89] found id: "b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71"
	I0923 13:57:41.519889 2443810 cri.go:89] found id: ""
	I0923 13:57:41.519896 2443810 logs.go:276] 2 containers: [735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71]
	I0923 13:57:41.519955 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:41.526672 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:41.530572 2443810 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:57:41.530660 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:57:41.571942 2443810 cri.go:89] found id: "9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93"
	I0923 13:57:41.571964 2443810 cri.go:89] found id: "d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45"
	I0923 13:57:41.571969 2443810 cri.go:89] found id: ""
	I0923 13:57:41.571976 2443810 logs.go:276] 2 containers: [9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93 d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45]
	I0923 13:57:41.572032 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:41.575731 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:41.579331 2443810 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:57:41.579445 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:57:41.630586 2443810 cri.go:89] found id: ""
	I0923 13:57:41.630612 2443810 logs.go:276] 0 containers: []
	W0923 13:57:41.630621 2443810 logs.go:278] No container was found matching "coredns"
	I0923 13:57:41.630628 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:57:41.630689 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:57:41.672794 2443810 cri.go:89] found id: "8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275"
	I0923 13:57:41.672818 2443810 cri.go:89] found id: "403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446"
	I0923 13:57:41.672822 2443810 cri.go:89] found id: ""
	I0923 13:57:41.672830 2443810 logs.go:276] 2 containers: [8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275 403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446]
	I0923 13:57:41.672887 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:41.676721 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:41.680226 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:57:41.680307 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:57:41.727474 2443810 cri.go:89] found id: "4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb"
	I0923 13:57:41.727546 2443810 cri.go:89] found id: ""
	I0923 13:57:41.727569 2443810 logs.go:276] 1 containers: [4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb]
	I0923 13:57:41.727661 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:41.733911 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:57:41.734041 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:57:41.793570 2443810 cri.go:89] found id: "0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5"
	I0923 13:57:41.793599 2443810 cri.go:89] found id: "959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582"
	I0923 13:57:41.793605 2443810 cri.go:89] found id: ""
	I0923 13:57:41.793617 2443810 logs.go:276] 2 containers: [0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5 959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582]
	I0923 13:57:41.793679 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:41.797578 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:41.801291 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:57:41.801369 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:57:41.841217 2443810 cri.go:89] found id: "664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc"
	I0923 13:57:41.841238 2443810 cri.go:89] found id: ""
	I0923 13:57:41.841245 2443810 logs.go:276] 1 containers: [664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc]
	I0923 13:57:41.841299 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:41.845027 2443810 logs.go:123] Gathering logs for kubelet ...
	I0923 13:57:41.845052 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 13:57:41.928712 2443810 logs.go:123] Gathering logs for kube-apiserver [b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71] ...
	I0923 13:57:41.928751 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71"
	I0923 13:57:41.975927 2443810 logs.go:123] Gathering logs for kube-scheduler [403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446] ...
	I0923 13:57:41.975960 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446"
	I0923 13:57:42.035637 2443810 logs.go:123] Gathering logs for container status ...
	I0923 13:57:42.035668 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:57:42.139594 2443810 logs.go:123] Gathering logs for dmesg ...
	I0923 13:57:42.139701 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:57:42.165994 2443810 logs.go:123] Gathering logs for kube-apiserver [735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c] ...
	I0923 13:57:42.166065 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c"
	I0923 13:57:42.247738 2443810 logs.go:123] Gathering logs for etcd [9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93] ...
	I0923 13:57:42.247813 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93"
	I0923 13:57:42.315296 2443810 logs.go:123] Gathering logs for kube-scheduler [8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275] ...
	I0923 13:57:42.315382 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275"
	I0923 13:57:42.374574 2443810 logs.go:123] Gathering logs for kube-controller-manager [0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5] ...
	I0923 13:57:42.374600 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5"
	I0923 13:57:42.455096 2443810 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:57:42.455185 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:57:42.779741 2443810 logs.go:123] Gathering logs for etcd [d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45] ...
	I0923 13:57:42.779828 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45"
	I0923 13:57:42.847485 2443810 logs.go:123] Gathering logs for kube-controller-manager [959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582] ...
	I0923 13:57:42.847523 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582"
	I0923 13:57:42.897059 2443810 logs.go:123] Gathering logs for kindnet [664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc] ...
	I0923 13:57:42.897094 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc"
	I0923 13:57:42.945175 2443810 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:57:42.945207 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 13:57:43.037009 2443810 logs.go:123] Gathering logs for kube-proxy [4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb] ...
	I0923 13:57:43.037050 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb"
	I0923 13:57:45.599278 2443810 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 13:57:45.609122 2443810 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 13:57:45.609201 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0923 13:57:45.609210 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:45.609220 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:45.609225 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:45.623001 2443810 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0923 13:57:45.623142 2443810 api_server.go:141] control plane version: v1.31.1
	I0923 13:57:45.623162 2443810 api_server.go:131] duration metric: took 43.93000093s to wait for apiserver health ...
	I0923 13:57:45.623181 2443810 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:57:45.623219 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 13:57:45.623285 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 13:57:45.661113 2443810 cri.go:89] found id: "735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c"
	I0923 13:57:45.661143 2443810 cri.go:89] found id: "b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71"
	I0923 13:57:45.661155 2443810 cri.go:89] found id: ""
	I0923 13:57:45.661163 2443810 logs.go:276] 2 containers: [735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71]
	I0923 13:57:45.661228 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:45.664798 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:45.668125 2443810 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 13:57:45.668221 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 13:57:45.715419 2443810 cri.go:89] found id: "9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93"
	I0923 13:57:45.715440 2443810 cri.go:89] found id: "d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45"
	I0923 13:57:45.715446 2443810 cri.go:89] found id: ""
	I0923 13:57:45.715453 2443810 logs.go:276] 2 containers: [9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93 d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45]
	I0923 13:57:45.715578 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:45.719436 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:45.723066 2443810 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 13:57:45.723142 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 13:57:45.763936 2443810 cri.go:89] found id: ""
	I0923 13:57:45.763961 2443810 logs.go:276] 0 containers: []
	W0923 13:57:45.763970 2443810 logs.go:278] No container was found matching "coredns"
	I0923 13:57:45.763983 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 13:57:45.764046 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 13:57:45.811780 2443810 cri.go:89] found id: "8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275"
	I0923 13:57:45.811803 2443810 cri.go:89] found id: "403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446"
	I0923 13:57:45.811808 2443810 cri.go:89] found id: ""
	I0923 13:57:45.811816 2443810 logs.go:276] 2 containers: [8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275 403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446]
	I0923 13:57:45.811876 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:45.815772 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:45.819203 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 13:57:45.819275 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 13:57:45.859656 2443810 cri.go:89] found id: "4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb"
	I0923 13:57:45.859680 2443810 cri.go:89] found id: ""
	I0923 13:57:45.859688 2443810 logs.go:276] 1 containers: [4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb]
	I0923 13:57:45.859746 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:45.864251 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 13:57:45.864372 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 13:57:45.907018 2443810 cri.go:89] found id: "0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5"
	I0923 13:57:45.907039 2443810 cri.go:89] found id: "959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582"
	I0923 13:57:45.907044 2443810 cri.go:89] found id: ""
	I0923 13:57:45.907051 2443810 logs.go:276] 2 containers: [0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5 959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582]
	I0923 13:57:45.907109 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:45.910934 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:45.914579 2443810 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 13:57:45.914650 2443810 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 13:57:45.952775 2443810 cri.go:89] found id: "664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc"
	I0923 13:57:45.952811 2443810 cri.go:89] found id: ""
	I0923 13:57:45.952819 2443810 logs.go:276] 1 containers: [664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc]
	I0923 13:57:45.952874 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:45.956366 2443810 logs.go:123] Gathering logs for kubelet ...
	I0923 13:57:45.956392 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 13:57:46.037262 2443810 logs.go:123] Gathering logs for etcd [9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93] ...
	I0923 13:57:46.037305 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b9e7ddb17db6a38a21b251e3196f1d1c22aa9b63f175a21c7fd0fc2a3557c93"
	I0923 13:57:46.087993 2443810 logs.go:123] Gathering logs for kube-scheduler [403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446] ...
	I0923 13:57:46.088025 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 403e29736e8999e9590153934f1ae8f607fa9adf6e8628ddb325bd5c36663446"
	I0923 13:57:46.126726 2443810 logs.go:123] Gathering logs for kube-controller-manager [0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5] ...
	I0923 13:57:46.126757 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0002cdd2c9988549cd54c7d64e772d50966e6e3b9fc6cf3e1e8ed44e89efcfa5"
	I0923 13:57:46.190892 2443810 logs.go:123] Gathering logs for container status ...
	I0923 13:57:46.190949 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 13:57:46.240271 2443810 logs.go:123] Gathering logs for dmesg ...
	I0923 13:57:46.240299 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 13:57:46.256450 2443810 logs.go:123] Gathering logs for kube-apiserver [735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c] ...
	I0923 13:57:46.256481 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 735c4948e78b6b3f252634549a934cb5c3f274db4b800db87d148e0216b6d56c"
	I0923 13:57:46.303377 2443810 logs.go:123] Gathering logs for kube-apiserver [b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71] ...
	I0923 13:57:46.303489 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b35ee50ee78941f173806cce810b028c53b33118f6dbe3fd8de895435c8e8d71"
	I0923 13:57:46.340928 2443810 logs.go:123] Gathering logs for kube-controller-manager [959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582] ...
	I0923 13:57:46.341005 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 959a1c239c5858ba2951d680c78358f30a5413306d9bc76aeba358f85044d582"
	I0923 13:57:46.377294 2443810 logs.go:123] Gathering logs for describe nodes ...
	I0923 13:57:46.377322 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 13:57:46.635502 2443810 logs.go:123] Gathering logs for etcd [d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45] ...
	I0923 13:57:46.635537 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d715cd2d4a28c02f8c80f48df080a1e0ee204be829dc156a1654860888a9cb45"
	I0923 13:57:46.709184 2443810 logs.go:123] Gathering logs for kube-scheduler [8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275] ...
	I0923 13:57:46.709220 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f25c8ec1dfc1c8ace2199a216025b700bc1f537d4b2e77a611d9b10d3ca1275"
	I0923 13:57:46.772057 2443810 logs.go:123] Gathering logs for kube-proxy [4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb] ...
	I0923 13:57:46.772087 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4700c3234c1f82d3e4a97de2da541bfc26a389949b79db53d810de5bc65515eb"
	I0923 13:57:46.843303 2443810 logs.go:123] Gathering logs for kindnet [664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc] ...
	I0923 13:57:46.843335 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 664c0dde0bba0226998803e4998ccd9eabaa62eef919f6144ed8538d34f883fc"
	I0923 13:57:46.908169 2443810 logs.go:123] Gathering logs for CRI-O ...
	I0923 13:57:46.908205 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
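Each gathering cycle above follows the same two-step shape: discover container IDs with "crictl ps -a --quiet --name=<component>", then tail each container's logs with "crictl logs --tail 400 <id>". A minimal standalone sketch of that cycle, assuming crictl on PATH and passwordless sudo — the component names and flags are taken from the log, the rest is illustrative:

	// gatherlogs: discover-then-tail, as the cycles above do.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-proxy", "kindnet"} {
			// Prints one container ID per line; empty output means the
			// component has no containers (cf. the coredns case above).
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Println(name, "list failed:", err)
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				fmt.Printf("=== %s [%s] ===\n", name, id)
				logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Println("logs failed:", err)
					continue
				}
				fmt.Print(string(logs))
			}
		}
	}

Note how this explains the paired "which crictl" calls in the log: one lookup per discovered container ID before its logs are pulled.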
	I0923 13:57:49.490524 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0923 13:57:49.490550 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:49.490570 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:49.490580 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:49.498247 2443810 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 13:57:49.509474 2443810 system_pods.go:59] 19 kube-system pods found
	I0923 13:57:49.509522 2443810 system_pods.go:61] "coredns-7c65d6cfc9-9sjjq" [c84ea89a-3451-4cc8-9b9b-86c6d9b1de63] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 13:57:49.509532 2443810 system_pods.go:61] "coredns-7c65d6cfc9-zwchv" [98cc6d43-f9da-4347-be5a-b425b0a01d05] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 13:57:49.509540 2443810 system_pods.go:61] "etcd-ha-952506" [3bab6fe9-6142-4b8d-ab5e-c8eb2c1058a9] Running
	I0923 13:57:49.509545 2443810 system_pods.go:61] "etcd-ha-952506-m02" [b016446a-1092-4e56-8fe4-25c8ca3f44c6] Running
	I0923 13:57:49.509549 2443810 system_pods.go:61] "kindnet-26stp" [8f400e90-e838-4eef-add3-892cab8653ca] Running
	I0923 13:57:49.509553 2443810 system_pods.go:61] "kindnet-bnkzg" [1bb1b0ec-99bf-4894-8e29-dd5c1abd2470] Running
	I0923 13:57:49.509557 2443810 system_pods.go:61] "kindnet-f4gmw" [32fa42df-a2e8-45ae-93a7-f18d5d85c3e0] Running
	I0923 13:57:49.509564 2443810 system_pods.go:61] "kube-apiserver-ha-952506" [722add7c-90ca-430a-a520-298ffe80bef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 13:57:49.509575 2443810 system_pods.go:61] "kube-apiserver-ha-952506-m02" [47f73bad-ade8-46c9-8c65-bbd1167e250e] Running
	I0923 13:57:49.509584 2443810 system_pods.go:61] "kube-controller-manager-ha-952506" [e2b4fb14-3d43-4c93-a783-8d9211cd3690] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 13:57:49.509594 2443810 system_pods.go:61] "kube-controller-manager-ha-952506-m02" [6c23ae2a-d4e9-44f9-b55d-35a00b249957] Running
	I0923 13:57:49.509599 2443810 system_pods.go:61] "kube-proxy-9w2p2" [d756d219-8d06-4fa9-a622-f1780898d303] Running
	I0923 13:57:49.509603 2443810 system_pods.go:61] "kube-proxy-qqlbp" [edb8f87a-2ba1-4f6f-8d99-fd12111a833e] Running
	I0923 13:57:49.509609 2443810 system_pods.go:61] "kube-proxy-s598q" [0c839fda-b978-4de8-a314-81087f4ea0bf] Running
	I0923 13:57:49.509613 2443810 system_pods.go:61] "kube-scheduler-ha-952506" [3efb253c-eee6-48a6-964a-af4a2ced1232] Running
	I0923 13:57:49.509620 2443810 system_pods.go:61] "kube-scheduler-ha-952506-m02" [2526f125-d2a5-49cf-a4de-73fa0c45f5e8] Running
	I0923 13:57:49.509624 2443810 system_pods.go:61] "kube-vip-ha-952506" [73f53c61-d542-4228-ab4a-7f5133724873] Running
	I0923 13:57:49.509628 2443810 system_pods.go:61] "kube-vip-ha-952506-m02" [6314f7cd-040b-4222-bafa-a2cba7b4a6d7] Running
	I0923 13:57:49.509631 2443810 system_pods.go:61] "storage-provisioner" [f1b38ae5-4f91-4ae9-8190-e1e8b488fc9e] Running
	I0923 13:57:49.509640 2443810 system_pods.go:74] duration metric: took 3.886448326s to wait for pod list to return data ...
	I0923 13:57:49.509648 2443810 default_sa.go:34] waiting for default service account to be created ...
	I0923 13:57:49.509743 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0923 13:57:49.509753 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:49.509762 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:49.509766 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:49.514365 2443810 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:57:49.514593 2443810 default_sa.go:45] found service account: "default"
	I0923 13:57:49.514612 2443810 default_sa.go:55] duration metric: took 4.956132ms for default service account to be created ...
	I0923 13:57:49.514622 2443810 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 13:57:49.514714 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0923 13:57:49.514730 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:49.514741 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:49.514745 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:49.597082 2443810 round_trippers.go:574] Response Status: 200 OK in 82 milliseconds
	I0923 13:57:49.605835 2443810 system_pods.go:86] 19 kube-system pods found
	I0923 13:57:49.605872 2443810 system_pods.go:89] "coredns-7c65d6cfc9-9sjjq" [c84ea89a-3451-4cc8-9b9b-86c6d9b1de63] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 13:57:49.605883 2443810 system_pods.go:89] "coredns-7c65d6cfc9-zwchv" [98cc6d43-f9da-4347-be5a-b425b0a01d05] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 13:57:49.605890 2443810 system_pods.go:89] "etcd-ha-952506" [3bab6fe9-6142-4b8d-ab5e-c8eb2c1058a9] Running
	I0923 13:57:49.605896 2443810 system_pods.go:89] "etcd-ha-952506-m02" [b016446a-1092-4e56-8fe4-25c8ca3f44c6] Running
	I0923 13:57:49.605902 2443810 system_pods.go:89] "kindnet-26stp" [8f400e90-e838-4eef-add3-892cab8653ca] Running
	I0923 13:57:49.605906 2443810 system_pods.go:89] "kindnet-bnkzg" [1bb1b0ec-99bf-4894-8e29-dd5c1abd2470] Running
	I0923 13:57:49.605919 2443810 system_pods.go:89] "kindnet-f4gmw" [32fa42df-a2e8-45ae-93a7-f18d5d85c3e0] Running
	I0923 13:57:49.605926 2443810 system_pods.go:89] "kube-apiserver-ha-952506" [722add7c-90ca-430a-a520-298ffe80bef8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 13:57:49.605934 2443810 system_pods.go:89] "kube-apiserver-ha-952506-m02" [47f73bad-ade8-46c9-8c65-bbd1167e250e] Running
	I0923 13:57:49.605942 2443810 system_pods.go:89] "kube-controller-manager-ha-952506" [e2b4fb14-3d43-4c93-a783-8d9211cd3690] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 13:57:49.605947 2443810 system_pods.go:89] "kube-controller-manager-ha-952506-m02" [6c23ae2a-d4e9-44f9-b55d-35a00b249957] Running
	I0923 13:57:49.605954 2443810 system_pods.go:89] "kube-proxy-9w2p2" [d756d219-8d06-4fa9-a622-f1780898d303] Running
	I0923 13:57:49.605959 2443810 system_pods.go:89] "kube-proxy-qqlbp" [edb8f87a-2ba1-4f6f-8d99-fd12111a833e] Running
	I0923 13:57:49.605965 2443810 system_pods.go:89] "kube-proxy-s598q" [0c839fda-b978-4de8-a314-81087f4ea0bf] Running
	I0923 13:57:49.605971 2443810 system_pods.go:89] "kube-scheduler-ha-952506" [3efb253c-eee6-48a6-964a-af4a2ced1232] Running
	I0923 13:57:49.605979 2443810 system_pods.go:89] "kube-scheduler-ha-952506-m02" [2526f125-d2a5-49cf-a4de-73fa0c45f5e8] Running
	I0923 13:57:49.605984 2443810 system_pods.go:89] "kube-vip-ha-952506" [73f53c61-d542-4228-ab4a-7f5133724873] Running
	I0923 13:57:49.605987 2443810 system_pods.go:89] "kube-vip-ha-952506-m02" [6314f7cd-040b-4222-bafa-a2cba7b4a6d7] Running
	I0923 13:57:49.605991 2443810 system_pods.go:89] "storage-provisioner" [f1b38ae5-4f91-4ae9-8190-e1e8b488fc9e] Running
	I0923 13:57:49.605998 2443810 system_pods.go:126] duration metric: took 91.370845ms to wait for k8s-apps to be running ...
	I0923 13:57:49.606012 2443810 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:57:49.606071 2443810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:57:49.620376 2443810 system_svc.go:56] duration metric: took 14.354653ms WaitForService to wait for kubelet
	I0923 13:57:49.620457 2443810 kubeadm.go:582] duration metric: took 1m13.099126249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:57:49.620492 2443810 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:57:49.620607 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0923 13:57:49.620633 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:49.620655 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:49.620677 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:49.623739 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:57:49.625825 2443810 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:57:49.625904 2443810 node_conditions.go:123] node cpu capacity is 2
	I0923 13:57:49.625930 2443810 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:57:49.625951 2443810 node_conditions.go:123] node cpu capacity is 2
	I0923 13:57:49.625980 2443810 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:57:49.626005 2443810 node_conditions.go:123] node cpu capacity is 2
	I0923 13:57:49.626025 2443810 node_conditions.go:105] duration metric: took 5.500513ms to run NodePressure ...
	I0923 13:57:49.626052 2443810 start.go:241] waiting for startup goroutines ...
	I0923 13:57:49.626097 2443810 start.go:255] writing updated cluster config ...
	I0923 13:57:49.629639 2443810 out.go:201] 
	I0923 13:57:49.632708 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:57:49.632863 2443810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/config.json ...
	I0923 13:57:49.636052 2443810 out.go:177] * Starting "ha-952506-m04" worker node in "ha-952506" cluster
	I0923 13:57:49.639269 2443810 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 13:57:49.641915 2443810 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:57:49.644440 2443810 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:57:49.644474 2443810 cache.go:56] Caching tarball of preloaded images
	I0923 13:57:49.644521 2443810 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:57:49.644615 2443810 preload.go:172] Found /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0923 13:57:49.644627 2443810 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 13:57:49.644779 2443810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/config.json ...
	I0923 13:57:49.663014 2443810 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 13:57:49.663040 2443810 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 13:57:49.663059 2443810 cache.go:194] Successfully downloaded all kic artifacts
	I0923 13:57:49.663085 2443810 start.go:360] acquireMachinesLock for ha-952506-m04: {Name:mk1fbeb745d6a2b752d229a0696a35ef8a62cb57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:57:49.663142 2443810 start.go:364] duration metric: took 37.8µs to acquireMachinesLock for "ha-952506-m04"
	I0923 13:57:49.663167 2443810 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:57:49.663176 2443810 fix.go:54] fixHost starting: m04
	I0923 13:57:49.663445 2443810 cli_runner.go:164] Run: docker container inspect ha-952506-m04 --format={{.State.Status}}
	I0923 13:57:49.679264 2443810 fix.go:112] recreateIfNeeded on ha-952506-m04: state=Stopped err=<nil>
	W0923 13:57:49.679295 2443810 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:57:49.682245 2443810 out.go:177] * Restarting existing docker container for "ha-952506-m04" ...
	I0923 13:57:49.684876 2443810 cli_runner.go:164] Run: docker start ha-952506-m04
	I0923 13:57:49.986775 2443810 cli_runner.go:164] Run: docker container inspect ha-952506-m04 --format={{.State.Status}}
	I0923 13:57:50.012964 2443810 kic.go:430] container "ha-952506-m04" state is running.
	I0923 13:57:50.013756 2443810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506-m04
	I0923 13:57:50.037436 2443810 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/config.json ...
	I0923 13:57:50.038497 2443810 machine.go:93] provisionDockerMachine start ...
	I0923 13:57:50.038797 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m04
	I0923 13:57:50.059583 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:57:50.059843 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35804 <nil> <nil>}
	I0923 13:57:50.059855 2443810 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:57:50.060743 2443810 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0923 13:57:53.197716 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-952506-m04
	
	I0923 13:57:53.197742 2443810 ubuntu.go:169] provisioning hostname "ha-952506-m04"
	I0923 13:57:53.197811 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m04
	I0923 13:57:53.214814 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:57:53.215063 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35804 <nil> <nil>}
	I0923 13:57:53.215082 2443810 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-952506-m04 && echo "ha-952506-m04" | sudo tee /etc/hostname
	I0923 13:57:53.366161 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-952506-m04
	
	I0923 13:57:53.366254 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m04
	I0923 13:57:53.385463 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:57:53.385719 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35804 <nil> <nil>}
	I0923 13:57:53.385744 2443810 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-952506-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-952506-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-952506-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:57:53.522291 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:57:53.522353 2443810 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19690-2377681/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-2377681/.minikube}
	I0923 13:57:53.522375 2443810 ubuntu.go:177] setting up certificates
	I0923 13:57:53.522385 2443810 provision.go:84] configureAuth start
	I0923 13:57:53.522444 2443810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506-m04
	I0923 13:57:53.538636 2443810 provision.go:143] copyHostCerts
	I0923 13:57:53.538682 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem
	I0923 13:57:53.538715 2443810 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem, removing ...
	I0923 13:57:53.538728 2443810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem
	I0923 13:57:53.538805 2443810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/cert.pem (1123 bytes)
	I0923 13:57:53.538888 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem
	I0923 13:57:53.538910 2443810 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem, removing ...
	I0923 13:57:53.538915 2443810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem
	I0923 13:57:53.538952 2443810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/key.pem (1679 bytes)
	I0923 13:57:53.538998 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem
	I0923 13:57:53.539018 2443810 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem, removing ...
	I0923 13:57:53.539025 2443810 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem
	I0923 13:57:53.539049 2443810 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.pem (1078 bytes)
	I0923 13:57:53.539100 2443810 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem org=jenkins.ha-952506-m04 san=[127.0.0.1 192.168.49.5 ha-952506-m04 localhost minikube]
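The "generating server cert" line above asks for a server certificate whose SANs cover every address the m04 node answers on (127.0.0.1, 192.168.49.5, its hostnames). A self-contained sketch of the same idea with crypto/x509 — self-signed here for brevity, whereas minikube signs with the cluster CA key named in the log:

	// certsketch: a server cert whose SANs match the log's san=[...] list.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-952506-m04"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log line: loopback, the node IP, its hostnames.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
			DNSNames:    []string{"ha-952506-m04", "localhost", "minikube"},
		}
		// Self-signed for brevity: tmpl doubles as its own parent. Signing
		// with a CA would pass the CA cert and key as the third and fifth
		// arguments instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}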
	I0923 13:57:54.928135 2443810 provision.go:177] copyRemoteCerts
	I0923 13:57:54.928206 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:57:54.928258 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m04
	I0923 13:57:54.945599 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35804 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m04/id_rsa Username:docker}
	I0923 13:57:55.052324 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 13:57:55.052407 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:57:55.082910 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 13:57:55.082980 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 13:57:55.108749 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 13:57:55.108815 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:57:55.142373 2443810 provision.go:87] duration metric: took 1.619974259s to configureAuth
	I0923 13:57:55.142400 2443810 ubuntu.go:193] setting minikube options for container-runtime
	I0923 13:57:55.142639 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:57:55.142759 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m04
	I0923 13:57:55.160930 2443810 main.go:141] libmachine: Using SSH client type: native
	I0923 13:57:55.161264 2443810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 35804 <nil> <nil>}
	I0923 13:57:55.161284 2443810 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 13:57:55.433777 2443810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 13:57:55.433799 2443810 machine.go:96] duration metric: took 5.395263952s to provisionDockerMachine
	I0923 13:57:55.433810 2443810 start.go:293] postStartSetup for "ha-952506-m04" (driver="docker")
	I0923 13:57:55.433830 2443810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:57:55.433905 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:57:55.433954 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m04
	I0923 13:57:55.452838 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35804 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m04/id_rsa Username:docker}
	I0923 13:57:55.555293 2443810 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:57:55.559078 2443810 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 13:57:55.559115 2443810 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 13:57:55.559126 2443810 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 13:57:55.559133 2443810 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 13:57:55.559145 2443810 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/addons for local assets ...
	I0923 13:57:55.559216 2443810 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-2377681/.minikube/files for local assets ...
	I0923 13:57:55.559309 2443810 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem -> 23830702.pem in /etc/ssl/certs
	I0923 13:57:55.559321 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem -> /etc/ssl/certs/23830702.pem
	I0923 13:57:55.559421 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:57:55.568558 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem --> /etc/ssl/certs/23830702.pem (1708 bytes)
	I0923 13:57:55.593384 2443810 start.go:296] duration metric: took 159.557644ms for postStartSetup
	I0923 13:57:55.593469 2443810 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:57:55.593516 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m04
	I0923 13:57:55.610282 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35804 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m04/id_rsa Username:docker}
	I0923 13:57:55.703474 2443810 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 13:57:55.721457 2443810 fix.go:56] duration metric: took 6.05827297s for fixHost
	I0923 13:57:55.721527 2443810 start.go:83] releasing machines lock for "ha-952506-m04", held for 6.058371339s
	I0923 13:57:55.721635 2443810 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506-m04
	I0923 13:57:55.746335 2443810 out.go:177] * Found network options:
	I0923 13:57:55.748976 2443810 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0923 13:57:55.752118 2443810 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:57:55.752148 2443810 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:57:55.752174 2443810 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:57:55.752184 2443810 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:57:55.752257 2443810 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 13:57:55.752304 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m04
	I0923 13:57:55.752593 2443810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 13:57:55.752655 2443810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m04
	I0923 13:57:55.776870 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35804 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m04/id_rsa Username:docker}
	I0923 13:57:55.784368 2443810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35804 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m04/id_rsa Username:docker}
	I0923 13:57:56.042776 2443810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:57:56.047959 2443810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:57:56.057175 2443810 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0923 13:57:56.057261 2443810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:57:56.066373 2443810 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 13:57:56.066399 2443810 start.go:495] detecting cgroup driver to use...
	I0923 13:57:56.066433 2443810 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 13:57:56.066485 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:57:56.078998 2443810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:57:56.090696 2443810 docker.go:217] disabling cri-docker service (if available) ...
	I0923 13:57:56.090772 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 13:57:56.104965 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 13:57:56.117028 2443810 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 13:57:56.220160 2443810 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 13:57:56.311677 2443810 docker.go:233] disabling docker service ...
	I0923 13:57:56.311803 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 13:57:56.328731 2443810 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 13:57:56.344530 2443810 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 13:57:56.457771 2443810 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 13:57:56.552348 2443810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 13:57:56.567262 2443810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:57:56.585602 2443810 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 13:57:56.585681 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:57:56.597187 2443810 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 13:57:56.597275 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:57:56.609914 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:57:56.620395 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:57:56.630934 2443810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:57:56.640825 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:57:56.651177 2443810 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 13:57:56.661627 2443810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
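Taken together, the sed edits above would leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings — a reconstruction from the logged commands, not a captured file:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

That is: the kubeadm-expected pause image, the cgroupfs driver detected on the host, conmon placed in the pod cgroup, and unprivileged low ports enabled cluster-wide.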
	I0923 13:57:56.679870 2443810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:57:56.688823 2443810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:57:56.697940 2443810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:57:56.796557 2443810 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 13:57:56.939092 2443810 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 13:57:56.939217 2443810 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 13:57:56.949504 2443810 start.go:563] Will wait 60s for crictl version
	I0923 13:57:56.949618 2443810 ssh_runner.go:195] Run: which crictl
	I0923 13:57:56.953398 2443810 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:57:57.000225 2443810 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0923 13:57:57.000374 2443810 ssh_runner.go:195] Run: crio --version
	I0923 13:57:57.053183 2443810 ssh_runner.go:195] Run: crio --version
	I0923 13:57:57.102879 2443810 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0923 13:57:57.105707 2443810 out.go:177]   - env NO_PROXY=192.168.49.2
	I0923 13:57:57.108178 2443810 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0923 13:57:57.110645 2443810 cli_runner.go:164] Run: docker network inspect ha-952506 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 13:57:57.133470 2443810 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 13:57:57.138236 2443810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:57:57.150151 2443810 mustload.go:65] Loading cluster: ha-952506
	I0923 13:57:57.150488 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:57:57.150753 2443810 cli_runner.go:164] Run: docker container inspect ha-952506 --format={{.State.Status}}
	I0923 13:57:57.167368 2443810 host.go:66] Checking if "ha-952506" exists ...
	I0923 13:57:57.167676 2443810 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506 for IP: 192.168.49.5
	I0923 13:57:57.167691 2443810 certs.go:194] generating shared ca certs ...
	I0923 13:57:57.167706 2443810 certs.go:226] acquiring lock for ca certs: {Name:mka74fca5f9586bfec26165232a0abe6b9527b85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:57:57.167836 2443810 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key
	I0923 13:57:57.167892 2443810 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key
	I0923 13:57:57.167909 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:57:57.167925 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:57:57.167947 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:57:57.167958 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:57:57.168016 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070.pem (1338 bytes)
	W0923 13:57:57.168048 2443810 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070_empty.pem, impossibly tiny 0 bytes
	I0923 13:57:57.168059 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 13:57:57.168087 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/ca.pem (1078 bytes)
	I0923 13:57:57.168114 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/cert.pem (1123 bytes)
	I0923 13:57:57.168138 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/key.pem (1679 bytes)
	I0923 13:57:57.168186 2443810 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem (1708 bytes)
	I0923 13:57:57.168219 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070.pem -> /usr/share/ca-certificates/2383070.pem
	I0923 13:57:57.168235 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem -> /usr/share/ca-certificates/23830702.pem
	I0923 13:57:57.168252 2443810 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:57:57.168268 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:57:57.195185 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 13:57:57.222509 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:57:57.250846 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:57:57.281505 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/certs/2383070.pem --> /usr/share/ca-certificates/2383070.pem (1338 bytes)
	I0923 13:57:57.307515 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/ssl/certs/23830702.pem --> /usr/share/ca-certificates/23830702.pem (1708 bytes)
	I0923 13:57:57.335260 2443810 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:57:57.360736 2443810 ssh_runner.go:195] Run: openssl version
	I0923 13:57:57.366378 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:57:57.375852 2443810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:57:57.379443 2443810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 13:25 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:57:57.379515 2443810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:57:57.387283 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:57:57.396291 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2383070.pem && ln -fs /usr/share/ca-certificates/2383070.pem /etc/ssl/certs/2383070.pem"
	I0923 13:57:57.405578 2443810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2383070.pem
	I0923 13:57:57.409142 2443810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 13:44 /usr/share/ca-certificates/2383070.pem
	I0923 13:57:57.409206 2443810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2383070.pem
	I0923 13:57:57.417400 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2383070.pem /etc/ssl/certs/51391683.0"
	I0923 13:57:57.426719 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23830702.pem && ln -fs /usr/share/ca-certificates/23830702.pem /etc/ssl/certs/23830702.pem"
	I0923 13:57:57.436110 2443810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23830702.pem
	I0923 13:57:57.440105 2443810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 13:44 /usr/share/ca-certificates/23830702.pem
	I0923 13:57:57.440191 2443810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23830702.pem
	I0923 13:57:57.447249 2443810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/23830702.pem /etc/ssl/certs/3ec20f2e.0"
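Each of the three cert blocks above follows the same recipe: copy the PEM into /usr/share/ca-certificates, ask openssl x509 -hash for the subject-name hash, and symlink /etc/ssl/certs/<hash>.0 at it so OpenSSL's CA lookup finds the cert (b5213941.0 is minikubeCA's hash here). A sketch of one iteration (hypothetical helper; it shells out to openssl exactly as the log does):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCertByHash computes the OpenSSL subject hash for certPath and
	// points /etc/ssl/certs/<hash>.0 at it, like the logged ln -fs.
	func linkCertByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("openssl x509 -hash: %w", err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		_ = os.Remove(link) // replace a stale link, mirroring ln -f
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}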
	I0923 13:57:57.456512 2443810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:57:57.459977 2443810 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:57:57.460020 2443810 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I0923 13:57:57.460138 2443810 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-952506-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-952506 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:57:57.460209 2443810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:57:57.468828 2443810 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:57:57.468908 2443810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0923 13:57:57.477786 2443810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0923 13:57:57.499515 2443810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
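The 363-byte asset is the kubelet drop-in printed at kubeadm.go:946 above; minikube renders it in memory and scp's it into /etc/systemd/system/kubelet.service.d/. A sketch that writes the same drop-in (the unit content is taken verbatim from the log; the write path mirrors the mkdir/scp pair above, but this is not minikube's code):

	package main

	import (
		"fmt"
		"os"
	)

	// dropIn is the unit content printed at kubeadm.go:946 above, verbatim.
	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-952506-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5

	[Install]
	`

	func main() {
		dir := "/etc/systemd/system/kubelet.service.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("wrote drop-in; follow with systemctl daemon-reload && systemctl start kubelet")
	}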
	I0923 13:57:57.520139 2443810 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0923 13:57:57.524005 2443810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:57:57.535642 2443810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:57:57.620167 2443810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:57:57.632274 2443810 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0923 13:57:57.632630 2443810 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:57:57.635155 2443810 out.go:177] * Verifying Kubernetes components...
	I0923 13:57:57.637792 2443810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:57:57.719131 2443810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:57:57.734220 2443810 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:57:57.734559 2443810 kapi.go:59] client config for ha-952506: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/ha-952506/client.key", CAFile:"/home/jenkins/minikube-integration/19690-2377681/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a16ec0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 13:57:57.734633 2443810 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0923 13:57:57.734858 2443810 node_ready.go:35] waiting up to 6m0s for node "ha-952506-m04" to be "Ready" ...
	I0923 13:57:57.734932 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:57:57.734944 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:57.734953 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:57.734957 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:57.737993 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:57:58.235685 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:57:58.235712 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:58.235722 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:58.235726 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:58.238871 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:57:58.735179 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:57:58.735205 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:58.735215 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:58.735220 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:58.738093 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:57:59.235989 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:57:59.236011 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:59.236022 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:59.236026 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:59.239130 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:57:59.735098 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:57:59.735121 2443810 round_trippers.go:469] Request Headers:
	I0923 13:57:59.735130 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:57:59.735135 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:57:59.737936 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:57:59.738623 2443810 node_ready.go:53] node "ha-952506-m04" has status "Ready":"Unknown"
	I0923 13:58:00.235319 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:58:00.235346 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:00.235367 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:00.235375 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:00.239007 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:00.735095 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:58:00.735119 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:00.735129 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:00.735135 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:00.738207 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:01.235045 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:58:01.235068 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:01.235079 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:01.235083 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:01.238148 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:01.735085 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:58:01.735107 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:01.735117 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:01.735122 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:01.738238 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:01.738850 2443810 node_ready.go:53] node "ha-952506-m04" has status "Ready":"Unknown"
	I0923 13:58:02.235431 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:58:02.235455 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:02.235465 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:02.235472 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:02.238503 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:02.735115 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:58:02.735139 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:02.735149 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:02.735156 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:02.738200 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:03.235549 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:58:03.235572 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:03.235582 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:03.235587 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:03.238550 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:03.735039 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:58:03.735065 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:03.735075 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:03.735079 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:03.737961 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:03.738671 2443810 node_ready.go:49] node "ha-952506-m04" has status "Ready":"True"
	I0923 13:58:03.738696 2443810 node_ready.go:38] duration metric: took 6.003819902s for node "ha-952506-m04" to be "Ready" ...
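The 6s wait above is a simple poll: GET /api/v1/nodes/ha-952506-m04 roughly every 500ms until the node's NodeReady condition flips from Unknown to True. A client-go sketch of the same loop (hypothetical, not minikube's node_ready.go; the kubeconfig path is whatever clientcmd recommends):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the NodeReady condition is True, i.e. the
	// "Ready":"True" seen above once the kubelet checks in.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // the log's 6m0s budget
		for time.Now().Before(deadline) {
			n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-952506-m04", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node ha-952506-m04 is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
		}
		fmt.Println("timed out waiting for node Ready")
	}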
	I0923 13:58:03.738707 2443810 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:58:03.738775 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0923 13:58:03.738795 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:03.738804 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:03.738814 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:03.744908 2443810 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
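That single GET returns every pod in kube-system; since the request carries no selector, the filtering down to the listed system-critical labels evidently happens client-side before the per-pod waits below. A client-go sketch of the list step (hypothetical; a server-side selector is shown for one label only to illustrate the alternative):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The log's request is unfiltered; the selector here narrows to
		// the coredns pods for brevity.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
	}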
	I0923 13:58:03.751979 2443810 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:03.752094 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:03.752106 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:03.752115 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:03.752120 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:03.755269 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:03.755941 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:03.755953 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:03.755961 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:03.755965 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:03.758785 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:04.252519 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:04.252549 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:04.252559 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:04.252564 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:04.255770 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:04.256533 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:04.256550 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:04.256560 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:04.256564 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:04.259177 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:04.752246 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:04.752271 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:04.752281 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:04.752285 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:04.755467 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:04.756220 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:04.756241 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:04.756249 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:04.756254 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:04.759163 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:05.252192 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:05.252217 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:05.252225 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:05.252232 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:05.255654 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:05.256459 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:05.256484 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:05.256493 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:05.256499 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:05.259607 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:05.752702 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:05.752728 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:05.752737 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:05.752741 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:05.755910 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:05.757006 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:05.757030 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:05.757040 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:05.757046 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:05.760002 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:05.760722 2443810 pod_ready.go:103] pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:58:06.252428 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:06.252453 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:06.252463 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:06.252467 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:06.255870 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:06.256661 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:06.256709 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:06.256732 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:06.256755 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:06.259649 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:06.752573 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:06.752604 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:06.752617 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:06.752623 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:06.756285 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:06.757094 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:06.757114 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:06.757123 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:06.757129 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:06.759924 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:07.252258 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:07.252278 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:07.252288 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:07.252292 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:07.255414 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:07.256179 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:07.256203 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:07.256225 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:07.256230 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:07.259527 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:07.753038 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:07.753061 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:07.753071 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:07.753077 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:07.757089 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:07.757973 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:07.757995 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:07.758004 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:07.758008 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:07.761133 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:07.761636 2443810 pod_ready.go:103] pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:58:08.252256 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:08.252280 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:08.252289 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:08.252293 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:08.255482 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:08.256665 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:08.256689 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:08.256699 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:08.256704 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:08.259707 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:08.752224 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:08.752249 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:08.752259 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:08.752263 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:08.755508 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:08.756312 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:08.756330 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:08.756340 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:08.756344 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:08.759248 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:09.252513 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:09.252536 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:09.252547 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:09.252553 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:09.255649 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:09.256562 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:09.256591 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:09.256601 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:09.256604 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:09.259304 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:09.752232 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:09.752260 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:09.752271 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:09.752275 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:09.755505 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:09.756284 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:09.756305 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:09.756314 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:09.756319 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:09.759166 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:10.253150 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:10.253173 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:10.253181 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:10.253187 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:10.256546 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:10.257496 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:10.257516 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:10.257525 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:10.257529 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:10.260463 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:10.260959 2443810 pod_ready.go:103] pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:58:10.752770 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:10.752794 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:10.752804 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:10.752808 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:10.756052 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:10.757118 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:10.757138 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:10.757147 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:10.757151 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:10.759998 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:11.253094 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:11.253118 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:11.253127 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:11.253132 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:11.256403 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:11.257156 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:11.257180 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:11.257190 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:11.257195 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:11.260200 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:11.752808 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:11.752832 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:11.752841 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:11.752845 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:11.756070 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:11.756841 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:11.756858 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:11.756867 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:11.756871 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:11.759448 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:12.252313 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:12.252336 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:12.252346 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:12.252349 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:12.255593 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:12.256364 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:12.256384 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:12.256393 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:12.256397 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:12.259197 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:12.753110 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:12.753136 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:12.753147 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:12.753153 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:12.756404 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:12.757042 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:12.757054 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:12.757063 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:12.757067 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:12.759811 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:12.760290 2443810 pod_ready.go:103] pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:58:13.252843 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:13.252863 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:13.252872 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:13.252877 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:13.256087 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:13.257333 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:13.257393 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:13.257416 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:13.257443 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:13.260131 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:13.752585 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:13.752608 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:13.752618 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:13.752623 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:13.755943 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:13.756744 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:13.756766 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:13.756776 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:13.756782 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:13.760172 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:14.252947 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:14.252971 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:14.252980 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:14.252986 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:14.256335 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:14.257099 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:14.257121 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:14.257132 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:14.257137 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:14.259910 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:14.752883 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:14.752907 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:14.752917 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:14.752922 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:14.756207 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:14.757151 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:14.757172 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:14.757182 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:14.757187 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:14.759970 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:14.760731 2443810 pod_ready.go:103] pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace has status "Ready":"False"
	I0923 13:58:15.253020 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:15.253044 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.253055 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.253059 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.256482 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:15.257279 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:15.257300 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.257309 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.257315 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.260022 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:15.752223 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9sjjq
	I0923 13:58:15.752249 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.752258 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.752263 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.765293 2443810 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0923 13:58:15.766483 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:15.766503 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.766512 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.766517 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.783490 2443810 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0923 13:58:15.784429 2443810 pod_ready.go:98] node "ha-952506" hosting pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:15.784455 2443810 pod_ready.go:82] duration metric: took 12.032440033s for pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace to be "Ready" ...
	E0923 13:58:15.784466 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506" hosting pod "coredns-7c65d6cfc9-9sjjq" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
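pod_ready's "Ready":"True"/"False" lines reduce to one test on the PodReady condition, with the short-circuit above kicking in when the hosting node itself is not Ready. A self-contained sketch of the condition check (not minikube's pod_ready.go):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// podReady: a pod counts as Ready only when its PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		p := corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		}}}
		fmt.Println(podReady(&p)) // false, like coredns-7c65d6cfc9-9sjjq above
	}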
	I0923 13:58:15.784482 2443810 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zwchv" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:15.784563 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zwchv
	I0923 13:58:15.784572 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.784581 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.784587 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.792673 2443810 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 13:58:15.793766 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:15.793788 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.793795 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.793803 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.806916 2443810 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0923 13:58:15.808047 2443810 pod_ready.go:98] node "ha-952506" hosting pod "coredns-7c65d6cfc9-zwchv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:15.808069 2443810 pod_ready.go:82] duration metric: took 23.57808ms for pod "coredns-7c65d6cfc9-zwchv" in "kube-system" namespace to be "Ready" ...
	E0923 13:58:15.808091 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506" hosting pod "coredns-7c65d6cfc9-zwchv" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:15.808099 2443810 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:15.808183 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-952506
	I0923 13:58:15.808193 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.808202 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.808214 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.813604 2443810 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:58:15.814567 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:15.814586 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.814595 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.814600 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.822660 2443810 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 13:58:15.823637 2443810 pod_ready.go:98] node "ha-952506" hosting pod "etcd-ha-952506" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:15.823662 2443810 pod_ready.go:82] duration metric: took 15.553795ms for pod "etcd-ha-952506" in "kube-system" namespace to be "Ready" ...
	E0923 13:58:15.823673 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506" hosting pod "etcd-ha-952506" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:15.823680 2443810 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:15.823769 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-952506-m02
	I0923 13:58:15.823777 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.823785 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.823793 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.833353 2443810 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 13:58:15.834294 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:58:15.834331 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.834341 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.834345 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.850112 2443810 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0923 13:58:15.851012 2443810 pod_ready.go:93] pod "etcd-ha-952506-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 13:58:15.851068 2443810 pod_ready.go:82] duration metric: took 27.368768ms for pod "etcd-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:15.851105 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:15.851206 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-952506
	I0923 13:58:15.851231 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.851255 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.851275 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.856150 2443810 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:58:15.861899 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:15.861961 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.861984 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.862004 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.865152 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:15.866123 2443810 pod_ready.go:98] node "ha-952506" hosting pod "kube-apiserver-ha-952506" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:15.866179 2443810 pod_ready.go:82] duration metric: took 15.053434ms for pod "kube-apiserver-ha-952506" in "kube-system" namespace to be "Ready" ...
	E0923 13:58:15.866205 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506" hosting pod "kube-apiserver-ha-952506" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:15.866224 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:15.952488 2443810 request.go:632] Waited for 86.156266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-952506-m02
	I0923 13:58:15.952603 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-952506-m02
	I0923 13:58:15.952635 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:15.952645 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:15.952650 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:15.956457 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:16.152510 2443810 request.go:632] Waited for 195.199693ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:58:16.152565 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:58:16.152571 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:16.152580 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:16.152590 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:16.155390 2443810 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:58:16.155988 2443810 pod_ready.go:93] pod "kube-apiserver-ha-952506-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 13:58:16.156041 2443810 pod_ready.go:82] duration metric: took 289.781374ms for pod "kube-apiserver-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
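The "Waited for ... due to client-side throttling" lines come from client-go's token-bucket rate limiter, not from the API server's priority-and-fairness machinery, as the message itself notes. A toy reproduction (the QPS and burst values are illustrative, not minikube's settings):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/util/flowcontrol"
	)

	func main() {
		rl := flowcontrol.NewTokenBucketRateLimiter(5, 10) // 5 QPS, burst of 10
		for i := 0; i < 15; i++ {
			start := time.Now()
			rl.Accept() // blocks until a token is available
			if d := time.Since(start); d > time.Millisecond {
				fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
			}
		}
	}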
	I0923 13:58:16.156062 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:16.352376 2443810 request.go:632] Waited for 196.244139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-952506
	I0923 13:58:16.352461 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-952506
	I0923 13:58:16.352498 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:16.352512 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:16.352517 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:16.355773 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:16.552899 2443810 request.go:632] Waited for 196.323005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:16.552999 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:16.553013 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:16.553023 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:16.553027 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:16.556154 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:16.556766 2443810 pod_ready.go:98] node "ha-952506" hosting pod "kube-controller-manager-ha-952506" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:16.556789 2443810 pod_ready.go:82] duration metric: took 400.716521ms for pod "kube-controller-manager-ha-952506" in "kube-system" namespace to be "Ready" ...
	E0923 13:58:16.556801 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506" hosting pod "kube-controller-manager-ha-952506" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:16.556810 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:16.752285 2443810 request.go:632] Waited for 195.383409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-952506-m02
	I0923 13:58:16.752399 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-952506-m02
	I0923 13:58:16.752438 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:16.752478 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:16.752496 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:16.756087 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:16.953241 2443810 request.go:632] Waited for 196.238002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:58:16.953301 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:58:16.953307 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:16.953317 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:16.953325 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:16.957226 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:16.957829 2443810 pod_ready.go:93] pod "kube-controller-manager-ha-952506-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 13:58:16.957850 2443810 pod_ready.go:82] duration metric: took 401.007567ms for pod "kube-controller-manager-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:16.957898 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9w2p2" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:17.152757 2443810 request.go:632] Waited for 194.778985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w2p2
	I0923 13:58:17.152924 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9w2p2
	I0923 13:58:17.152954 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:17.152975 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:17.152994 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:17.157481 2443810 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:58:17.352564 2443810 request.go:632] Waited for 194.324784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:58:17.352648 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m04
	I0923 13:58:17.352671 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:17.352689 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:17.352695 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:17.355859 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:17.356617 2443810 pod_ready.go:93] pod "kube-proxy-9w2p2" in "kube-system" namespace has status "Ready":"True"
	I0923 13:58:17.356638 2443810 pod_ready.go:82] duration metric: took 398.725726ms for pod "kube-proxy-9w2p2" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:17.356651 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qqlbp" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:17.552777 2443810 request.go:632] Waited for 196.059922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqlbp
	I0923 13:58:17.552846 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qqlbp
	I0923 13:58:17.552872 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:17.552887 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:17.552891 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:17.556015 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:17.752424 2443810 request.go:632] Waited for 195.727147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:17.752506 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:17.752563 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:17.752576 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:17.752581 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:17.755673 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:17.756256 2443810 pod_ready.go:98] node "ha-952506" hosting pod "kube-proxy-qqlbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:17.756280 2443810 pod_ready.go:82] duration metric: took 399.62095ms for pod "kube-proxy-qqlbp" in "kube-system" namespace to be "Ready" ...
	E0923 13:58:17.756292 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506" hosting pod "kube-proxy-qqlbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:17.756305 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s598q" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:17.952698 2443810 request.go:632] Waited for 196.314915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s598q
	I0923 13:58:17.952768 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s598q
	I0923 13:58:17.952780 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:17.952789 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:17.952801 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:17.959770 2443810 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:58:18.152939 2443810 request.go:632] Waited for 192.103619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:58:18.153015 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:58:18.153026 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:18.153035 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:18.153084 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:18.160164 2443810 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 13:58:18.160699 2443810 pod_ready.go:93] pod "kube-proxy-s598q" in "kube-system" namespace has status "Ready":"True"
	I0923 13:58:18.160719 2443810 pod_ready.go:82] duration metric: took 404.404975ms for pod "kube-proxy-s598q" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:18.160731 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-952506" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:18.353008 2443810 request.go:632] Waited for 192.205171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-952506
	I0923 13:58:18.353091 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-952506
	I0923 13:58:18.353117 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:18.353132 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:18.353139 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:18.356465 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:18.552585 2443810 request.go:632] Waited for 195.322602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:18.552670 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506
	I0923 13:58:18.552695 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:18.552710 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:18.552732 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:18.556054 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:18.556770 2443810 pod_ready.go:98] node "ha-952506" hosting pod "kube-scheduler-ha-952506" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:18.556792 2443810 pod_ready.go:82] duration metric: took 396.04917ms for pod "kube-scheduler-ha-952506" in "kube-system" namespace to be "Ready" ...
	E0923 13:58:18.556802 2443810 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-952506" hosting pod "kube-scheduler-ha-952506" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-952506" has status "Ready":"Unknown"
	I0923 13:58:18.556809 2443810 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:18.752593 2443810 request.go:632] Waited for 195.704804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-952506-m02
	I0923 13:58:18.752717 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-952506-m02
	I0923 13:58:18.752729 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:18.752738 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:18.752744 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:18.755912 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:18.952887 2443810 request.go:632] Waited for 196.343345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:58:18.952990 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-952506-m02
	I0923 13:58:18.953041 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:18.953066 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:18.953076 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:18.956276 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:18.957005 2443810 pod_ready.go:93] pod "kube-scheduler-ha-952506-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 13:58:18.957029 2443810 pod_ready.go:82] duration metric: took 400.208555ms for pod "kube-scheduler-ha-952506-m02" in "kube-system" namespace to be "Ready" ...
	I0923 13:58:18.957043 2443810 pod_ready.go:39] duration metric: took 15.218325334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:58:18.957059 2443810 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:58:18.957125 2443810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:58:18.969393 2443810 system_svc.go:56] duration metric: took 12.325344ms WaitForService to wait for kubelet
	I0923 13:58:18.969423 2443810 kubeadm.go:582] duration metric: took 21.337106322s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:58:18.969443 2443810 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:58:19.152830 2443810 request.go:632] Waited for 183.312386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0923 13:58:19.152887 2443810 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0923 13:58:19.152894 2443810 round_trippers.go:469] Request Headers:
	I0923 13:58:19.152903 2443810 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0923 13:58:19.152911 2443810 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:58:19.156511 2443810 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:58:19.157680 2443810 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:58:19.157717 2443810 node_conditions.go:123] node cpu capacity is 2
	I0923 13:58:19.157737 2443810 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:58:19.157743 2443810 node_conditions.go:123] node cpu capacity is 2
	I0923 13:58:19.157747 2443810 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0923 13:58:19.157763 2443810 node_conditions.go:123] node cpu capacity is 2
	I0923 13:58:19.157768 2443810 node_conditions.go:105] duration metric: took 188.31998ms to run NodePressure ...
	I0923 13:58:19.157796 2443810 start.go:241] waiting for startup goroutines ...
	I0923 13:58:19.157822 2443810 start.go:255] writing updated cluster config ...
	I0923 13:58:19.158198 2443810 ssh_runner.go:195] Run: rm -f paused
	I0923 13:58:19.226628 2443810 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 13:58:19.229378 2443810 out.go:177] * Done! kubectl is now configured to use "ha-952506" cluster and "default" namespace by default
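	
	The pod_ready waits above poll each control-plane pod until its PodReady condition is True, and skip pods whose host node reports Ready:"Unknown"; the ~195ms "client-side throttling" pauses between requests are consistent with client-go's default rate limit of 5 QPS (one request per 200ms). A minimal sketch of that polling pattern, assuming client-go (waitPodReady, the 2s interval, and the KUBECONFIG handling are illustrative, not minikube's actual pod_ready.go):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"os"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitPodReady polls a pod until PodReady is True, giving up early (like
	// the WaitExtra errors above) when the pod's host node is not Ready.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
				if err == nil {
					for _, c := range node.Status.Conditions {
						if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
							return false, fmt.Errorf("node %q hosting pod %q is not Ready (%s)", node.Name, name, c.Status)
						}
					}
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-952506"); err != nil {
			log.Fatal(err)
		}
	}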
	
	
	==> CRI-O <==
	Sep 23 13:57:42 ha-952506 crio[641]: time="2024-09-23 13:57:42.974780471Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2843a6f7beaad21ec6ebe35a20c49737bf2ce2d2e7a19e5ff899f9d92ddc0432/merged/etc/group: no such file or directory"
	Sep 23 13:57:43 ha-952506 crio[641]: time="2024-09-23 13:57:43.051801134Z" level=info msg="Created container 6979bba128f9a15b4214af170a1b27c9622103c6c2933523ddbe99918250112f: kube-system/storage-provisioner/storage-provisioner" id=f631fa01-f0df-4e5c-bf23-c784807d345e name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 13:57:43 ha-952506 crio[641]: time="2024-09-23 13:57:43.053039644Z" level=info msg="Starting container: 6979bba128f9a15b4214af170a1b27c9622103c6c2933523ddbe99918250112f" id=af7c3e30-fb62-40c0-bfed-652b39dbac95 name=/runtime.v1.RuntimeService/StartContainer
	Sep 23 13:57:43 ha-952506 crio[641]: time="2024-09-23 13:57:43.061025316Z" level=info msg="Started container" PID=1838 containerID=6979bba128f9a15b4214af170a1b27c9622103c6c2933523ddbe99918250112f description=kube-system/storage-provisioner/storage-provisioner id=af7c3e30-fb62-40c0-bfed-652b39dbac95 name=/runtime.v1.RuntimeService/StartContainer sandboxID=050d082cc688eacc8720c65d12a1948b4c22c8646094eb3cf62ba5009e316e56
	Sep 23 13:57:46 ha-952506 crio[641]: time="2024-09-23 13:57:46.711590339Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=c76c68f0-425b-43a4-b44a-d63e977d51db name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:57:46 ha-952506 crio[641]: time="2024-09-23 13:57:46.711821841Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=c76c68f0-425b-43a4-b44a-d63e977d51db name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:57:46 ha-952506 crio[641]: time="2024-09-23 13:57:46.712707096Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=6ab713f1-b6ed-410c-a117-be4b828304d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:57:46 ha-952506 crio[641]: time="2024-09-23 13:57:46.712905893Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=6ab713f1-b6ed-410c-a117-be4b828304d3 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 13:57:46 ha-952506 crio[641]: time="2024-09-23 13:57:46.714619444Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-952506/kube-controller-manager" id=60e532b1-3237-4300-8ffa-9bed9f47e84d name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 13:57:46 ha-952506 crio[641]: time="2024-09-23 13:57:46.714727642Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 23 13:57:46 ha-952506 crio[641]: time="2024-09-23 13:57:46.809690631Z" level=info msg="Created container c3196d892971841661d80e8de75b81dd9fcfd87261f93ad3e399e9bf65ba0819: kube-system/kube-controller-manager-ha-952506/kube-controller-manager" id=60e532b1-3237-4300-8ffa-9bed9f47e84d name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 13:57:46 ha-952506 crio[641]: time="2024-09-23 13:57:46.810250938Z" level=info msg="Starting container: c3196d892971841661d80e8de75b81dd9fcfd87261f93ad3e399e9bf65ba0819" id=d9d5236c-b186-497a-9a1e-656578800b13 name=/runtime.v1.RuntimeService/StartContainer
	Sep 23 13:57:46 ha-952506 crio[641]: time="2024-09-23 13:57:46.822883786Z" level=info msg="Started container" PID=1882 containerID=c3196d892971841661d80e8de75b81dd9fcfd87261f93ad3e399e9bf65ba0819 description=kube-system/kube-controller-manager-ha-952506/kube-controller-manager id=d9d5236c-b186-497a-9a1e-656578800b13 name=/runtime.v1.RuntimeService/StartContainer sandboxID=22b7ebb8418c945837ff573b21159d5f325f6259a159f1de8b6cd4d72196f89d
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.619880438Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.625863098Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.625899511Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.625922042Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.629155581Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.629194300Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.629210366Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.632176432Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.632213904Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.632235639Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.635495582Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 23 13:57:52 ha-952506 crio[641]: time="2024-09-23 13:57:52.635531388Z" level=info msg="Updated default CNI network name to kindnet"
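	
	The CREATE → WRITE → RENAME sequence above is CRI-O's CNI watcher reacting to kindnet rewriting its config atomically: the conflist is written to 10-kindnet.conflist.temp and then renamed over the real file, and each event makes CRI-O re-scan /etc/cni/net.d and re-resolve the default network. A minimal sketch of that watch pattern, assuming the fsnotify library (illustrative, not CRI-O's implementation):
	
	package main
	
	import (
		"log"
	
		"github.com/fsnotify/fsnotify"
	)
	
	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// CRI-O logs CREATE/WRITE/RENAME events like the ones above,
				// then re-reads the directory for the first valid conflist.
				if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
					log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
				}
			case err := <-w.Errors:
				log.Printf("watch error: %v", err)
			}
		}
	}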
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c3196d8929718       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   34 seconds ago       Running             kube-controller-manager   8                   22b7ebb8418c9       kube-controller-manager-ha-952506
	6979bba128f9a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   38 seconds ago       Running             storage-provisioner       4                   050d082cc688e       storage-provisioner
	fc1cbeaa283d4       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   39 seconds ago       Running             kube-vip                  3                   89171cacd6cb6       kube-vip-ha-952506
	529a10ef47f16       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   43 seconds ago       Running             kube-apiserver            4                   ad93e6786733a       kube-apiserver-ha-952506
	a14a14bc18e99       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   c8765cddd8939       coredns-7c65d6cfc9-9sjjq
	cc92f423365c8       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   01172cfdbe007       coredns-7c65d6cfc9-zwchv
	aa46d8f88d7a4       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   About a minute ago   Running             kube-proxy                2                   d527868349bc9       kube-proxy-qqlbp
	5bbb791962a77       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   98e7a7028a618       kindnet-f4gmw
	5bf103081b6e1       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   e115375d05fa6       busybox-7dff88458-mm8mn
	40565961cacec       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       3                   050d082cc688e       storage-provisioner
	2d05c57aee66f       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   About a minute ago   Exited              kube-controller-manager   7                   22b7ebb8418c9       kube-controller-manager-ha-952506
	6d88787b5e295       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   About a minute ago   Exited              kube-vip                  2                   89171cacd6cb6       kube-vip-ha-952506
	e7dbdce650f16       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   About a minute ago   Running             kube-scheduler            2                   7e37d6d9e476e       kube-scheduler-ha-952506
	9738d35f1f4b7       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   About a minute ago   Exited              kube-apiserver            3                   ad93e6786733a       kube-apiserver-ha-952506
	121c27d8301a9       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Running             etcd                      2                   28fac2ff04539       etcd-ha-952506
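	
	The container listing above is what minikube collects from CRI-O post-mortem (the same data "sudo crictl ps -a" reports on the node). The ATTEMPT column is the per-container restart count: kube-controller-manager is on attempt 8 and kube-apiserver on attempt 4, matching their Exited predecessors (attempts 7 and 3) further down and the control-plane churn recorded in the node events below.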
	
	
	==> coredns [a14a14bc18e992453f7206d9b61320aa8676a0bc6f08a7a62fc2fe65c917fcbe] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52804 - 41363 "HINFO IN 7374346401105266089.1814598958067444078. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01267425s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1966547222]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:57:12.436) (total time: 30010ms):
	Trace[1966547222]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30010ms (13:57:42.446)
	Trace[1966547222]: [30.010964849s] [30.010964849s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[241529393]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:57:12.447) (total time: 30003ms):
	Trace[241529393]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (13:57:42.450)
	Trace[241529393]: [30.003804118s] [30.003804118s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1729109496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:57:12.429) (total time: 30021ms):
	Trace[1729109496]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30021ms (13:57:42.451)
	Trace[1729109496]: [30.021587414s] [30.021587414s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [cc92f423365c8542d5416cce7bcaeb2208b0f50014c7914797068396c11172f2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55184 - 17500 "HINFO IN 9043950521796556722.4984364566818976089. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031487782s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[341626086]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:57:12.385) (total time: 30002ms):
	Trace[341626086]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (13:57:42.386)
	Trace[341626086]: [30.002188335s] [30.002188335s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[781552716]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:57:12.385) (total time: 30002ms):
	Trace[781552716]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:57:42.387)
	Trace[781552716]: [30.002179779s] [30.002179779s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[581328003]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 13:57:12.385) (total time: 30002ms):
	Trace[581328003]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (13:57:42.388)
	Trace[581328003]: [30.002978044s] [30.002978044s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
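	
	Both coredns replicas show the same startup stall: for 30 seconds every List against the kubernetes Service VIP 10.96.0.1:443 ended in "i/o timeout", because that VIP is only NAT rules programmed by kube-proxy and the apiserver endpoint behind it was still restarting, so the TCP dial hung until the client's timeout. A minimal connectivity probe in the same spirit, meant to be run inside a pod (the address and timeout mirror the log; this is not coredns code):
	
	package main
	
	import (
		"log"
		"net"
		"time"
	)
	
	func main() {
		// Dial the in-cluster kubernetes Service VIP the way coredns's client
		// does; while no apiserver endpoint is up this blocks until the
		// timeout, which is exactly the 30s traces above.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 30*time.Second)
		if err != nil {
			log.Fatalf("apiserver VIP unreachable: %v", err)
		}
		conn.Close()
		log.Println("apiserver VIP reachable")
	}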
	
	
	==> describe nodes <==
	Name:               ha-952506
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-952506
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-952506
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_48_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:48:11 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-952506
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:57:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 13:57:02 +0000   Mon, 23 Sep 2024 13:58:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 13:57:02 +0000   Mon, 23 Sep 2024 13:58:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 13:57:02 +0000   Mon, 23 Sep 2024 13:58:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 13:57:02 +0000   Mon, 23 Sep 2024 13:58:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-952506
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a4962ccf15114db79f7384146a23660b
	  System UUID:                2cc7adf2-8a67-4bc4-a39f-3ba3d0687300
	  Boot ID:                    97839423-83c8-4f76-b1f5-7b978ef1271e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mm8mn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 coredns-7c65d6cfc9-9sjjq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-7c65d6cfc9-zwchv             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-952506                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-f4gmw                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-952506             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-952506    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-qqlbp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-952506             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-952506                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 69s                    kube-proxy       
	  Normal   Starting                 4m34s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-952506 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-952506 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-952506 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node ha-952506 event: Registered Node ha-952506 in Controller
	  Normal   NodeReady                9m53s                  kubelet          Node ha-952506 status is now: NodeReady
	  Normal   RegisteredNode           9m36s                  node-controller  Node ha-952506 event: Registered Node ha-952506 in Controller
	  Normal   RegisteredNode           8m28s                  node-controller  Node ha-952506 event: Registered Node ha-952506 in Controller
	  Normal   RegisteredNode           6m1s                   node-controller  Node ha-952506 event: Registered Node ha-952506 in Controller
	  Normal   NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-952506 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-952506 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-952506 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 5m24s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m24s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m53s                  node-controller  Node ha-952506 event: Registered Node ha-952506 in Controller
	  Normal   RegisteredNode           3m54s                  node-controller  Node ha-952506 event: Registered Node ha-952506 in Controller
	  Normal   RegisteredNode           3m30s                  node-controller  Node ha-952506 event: Registered Node ha-952506 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-952506 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-952506 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node ha-952506 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                    node-controller  Node ha-952506 event: Registered Node ha-952506 in Controller
	  Normal   RegisteredNode           32s                    node-controller  Node ha-952506 event: Registered Node ha-952506 in Controller
	  Normal   NodeNotReady             6s                     node-controller  Node ha-952506 status is now: NodeNotReady
	
	
	Name:               ha-952506-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-952506-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-952506
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_48_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:48:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-952506-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:58:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:56:56 +0000   Mon, 23 Sep 2024 13:48:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:56:56 +0000   Mon, 23 Sep 2024 13:48:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:56:56 +0000   Mon, 23 Sep 2024 13:48:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:56:56 +0000   Mon, 23 Sep 2024 13:49:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-952506-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 30b1878460bd42978af9494efbaabb6b
	  System UUID:                e9ab0ceb-4733-4375-ac75-ec1379d43f05
	  Boot ID:                    97839423-83c8-4f76-b1f5-7b978ef1271e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-94cn4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 etcd-ha-952506-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m45s
	  kube-system                 kindnet-bnkzg                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m47s
	  kube-system                 kube-apiserver-ha-952506-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 kube-controller-manager-ha-952506-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-proxy-s598q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 kube-scheduler-ha-952506-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m40s
	  kube-system                 kube-vip-ha-952506-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m41s                  kube-proxy       
	  Normal   Starting                 4m23s                  kube-proxy       
	  Normal   Starting                 6m4s                   kube-proxy       
	  Normal   Starting                 69s                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    9m47s (x8 over 9m47s)  kubelet          Node ha-952506-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m47s                  node-controller  Node ha-952506-m02 event: Registered Node ha-952506-m02 in Controller
	  Normal   NodeHasSufficientMemory  9m47s (x8 over 9m47s)  kubelet          Node ha-952506-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     9m47s (x7 over 9m47s)  kubelet          Node ha-952506-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m37s                  node-controller  Node ha-952506-m02 event: Registered Node ha-952506-m02 in Controller
	  Normal   RegisteredNode           8m29s                  node-controller  Node ha-952506-m02 event: Registered Node ha-952506-m02 in Controller
	  Normal   NodeHasSufficientPID     6m33s (x7 over 6m33s)  kubelet          Node ha-952506-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m33s (x8 over 6m33s)  kubelet          Node ha-952506-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m33s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m33s (x8 over 6m33s)  kubelet          Node ha-952506-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           6m2s                   node-controller  Node ha-952506-m02 event: Registered Node ha-952506-m02 in Controller
	  Normal   NodeHasSufficientMemory  5m23s (x8 over 5m23s)  kubelet          Node ha-952506-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m23s (x7 over 5m23s)  kubelet          Node ha-952506-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m23s (x8 over 5m23s)  kubelet          Node ha-952506-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m23s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           4m54s                  node-controller  Node ha-952506-m02 event: Registered Node ha-952506-m02 in Controller
	  Normal   RegisteredNode           3m55s                  node-controller  Node ha-952506-m02 event: Registered Node ha-952506-m02 in Controller
	  Normal   RegisteredNode           3m31s                  node-controller  Node ha-952506-m02 event: Registered Node ha-952506-m02 in Controller
	  Normal   Starting                 116s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 116s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  115s (x8 over 116s)    kubelet          Node ha-952506-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    115s (x8 over 116s)    kubelet          Node ha-952506-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     115s (x7 over 116s)    kubelet          Node ha-952506-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           82s                    node-controller  Node ha-952506-m02 event: Registered Node ha-952506-m02 in Controller
	  Normal   RegisteredNode           33s                    node-controller  Node ha-952506-m02 event: Registered Node ha-952506-m02 in Controller
	
	
	Name:               ha-952506-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-952506-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-952506
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_50_56_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:50:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-952506-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:58:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:58:03 +0000   Mon, 23 Sep 2024 13:58:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:58:03 +0000   Mon, 23 Sep 2024 13:58:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:58:03 +0000   Mon, 23 Sep 2024 13:58:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:58:03 +0000   Mon, 23 Sep 2024 13:58:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-952506-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 877444ef1ab045eebd5db3cdddcc6dd0
	  System UUID:                b3a9f97b-0cdc-44c9-a78b-bc979727dc75
	  Boot ID:                    97839423-83c8-4f76-b1f5-7b978ef1271e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hglkc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kindnet-26stp              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m26s
	  kube-system                 kube-proxy-9w2p2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8s                     kube-proxy       
	  Normal   Starting                 7m24s                  kube-proxy       
	  Normal   Starting                 2m56s                  kube-proxy       
	  Warning  CgroupV1                 7m27s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 7m27s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  7m27s (x2 over 7m27s)  kubelet          Node ha-952506-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m27s (x2 over 7m27s)  kubelet          Node ha-952506-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m27s (x2 over 7m27s)  kubelet          Node ha-952506-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m24s                  node-controller  Node ha-952506-m04 event: Registered Node ha-952506-m04 in Controller
	  Normal   RegisteredNode           7m22s                  node-controller  Node ha-952506-m04 event: Registered Node ha-952506-m04 in Controller
	  Normal   RegisteredNode           7m22s                  node-controller  Node ha-952506-m04 event: Registered Node ha-952506-m04 in Controller
	  Normal   NodeReady                7m14s                  kubelet          Node ha-952506-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m2s                   node-controller  Node ha-952506-m04 event: Registered Node ha-952506-m04 in Controller
	  Normal   RegisteredNode           4m54s                  node-controller  Node ha-952506-m04 event: Registered Node ha-952506-m04 in Controller
	  Normal   NodeNotReady             4m14s                  node-controller  Node ha-952506-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m54s                  node-controller  Node ha-952506-m04 event: Registered Node ha-952506-m04 in Controller
	  Normal   RegisteredNode           3m31s                  node-controller  Node ha-952506-m04 event: Registered Node ha-952506-m04 in Controller
	  Normal   Starting                 3m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m15s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m8s (x7 over 3m15s)   kubelet          Node ha-952506-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m2s (x8 over 3m15s)   kubelet          Node ha-952506-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m2s (x8 over 3m15s)   kubelet          Node ha-952506-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           82s                    node-controller  Node ha-952506-m04 event: Registered Node ha-952506-m04 in Controller
	  Normal   NodeNotReady             42s                    node-controller  Node ha-952506-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           33s                    node-controller  Node ha-952506-m04 event: Registered Node ha-952506-m04 in Controller
	  Normal   Starting                 32s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     25s (x7 over 32s)      kubelet          Node ha-952506-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  19s (x8 over 32s)      kubelet          Node ha-952506-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 32s)      kubelet          Node ha-952506-m04 status is now: NodeHasNoDiskPressure
	
	
	==> dmesg <==
	[Sep23 13:41] hrtimer: interrupt took 2926293 ns
	
	
	==> etcd [121c27d8301a93b2e960749cb40b284b1f3bbbbd829ea052b2e1b9f10ef0e6f0] <==
	{"level":"warn","ts":"2024-09-23T13:56:55.162915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.870035498s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:56:55.180523Z","caller":"traceutil/trace.go:171","msg":"trace[133803877] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:2486; }","duration":"5.88763996s","start":"2024-09-23T13:56:49.292874Z","end":"2024-09-23T13:56:55.180514Z","steps":["trace[133803877] 'agreement among raft nodes before linearized reading'  (duration: 5.87002799s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:56:55.180552Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:56:49.292828Z","time spent":"5.887713427s","remote":"127.0.0.1:49766","response type":"/etcdserverpb.KV/Range","request count":0,"request size":26,"response count":0,"response size":29,"request content":"key:\"/registry/clusterroles\" limit:1 "}
	{"level":"warn","ts":"2024-09-23T13:56:55.163039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.905444836s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:500 ","response":"range_response_count:55 size:39193"}
	{"level":"info","ts":"2024-09-23T13:56:55.180694Z","caller":"traceutil/trace.go:171","msg":"trace[329100683] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:55; response_revision:2486; }","duration":"5.923098643s","start":"2024-09-23T13:56:49.257588Z","end":"2024-09-23T13:56:55.180686Z","steps":["trace[329100683] 'agreement among raft nodes before linearized reading'  (duration: 5.90533507s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:56:55.180723Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:56:49.257538Z","time spent":"5.923174718s","remote":"127.0.0.1:49780","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":55,"response size":39217,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" limit:500 "}
	{"level":"warn","ts":"2024-09-23T13:56:55.163122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.950377894s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 ","response":"range_response_count:12 size:8695"}
	{"level":"info","ts":"2024-09-23T13:56:55.180935Z","caller":"traceutil/trace.go:171","msg":"trace[657375604] range","detail":"{range_begin:/registry/rolebindings/; range_end:/registry/rolebindings0; response_count:12; response_revision:2486; }","duration":"5.968188365s","start":"2024-09-23T13:56:49.212739Z","end":"2024-09-23T13:56:55.180928Z","steps":["trace[657375604] 'agreement among raft nodes before linearized reading'  (duration: 5.9503093s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:56:55.180965Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:56:49.212688Z","time spent":"5.968264777s","remote":"127.0.0.1:49762","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":12,"response size":8719,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 "}
	{"level":"warn","ts":"2024-09-23T13:56:55.163143Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.97883671s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:56:55.182073Z","caller":"traceutil/trace.go:171","msg":"trace[967710034] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:2486; }","duration":"5.997760895s","start":"2024-09-23T13:56:49.184302Z","end":"2024-09-23T13:56:55.182063Z","steps":["trace[967710034] 'agreement among raft nodes before linearized reading'  (duration: 5.978829088s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:56:55.182105Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:56:49.184256Z","time spent":"5.997834608s","remote":"127.0.0.1:49908","response type":"/etcdserverpb.KV/Range","request count":0,"request size":91,"response count":0,"response size":29,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" limit:500 "}
	{"level":"warn","ts":"2024-09-23T13:56:55.163166Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.031793503s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:500 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:56:55.182261Z","caller":"traceutil/trace.go:171","msg":"trace[1849260986] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:2486; }","duration":"6.050886832s","start":"2024-09-23T13:56:49.131366Z","end":"2024-09-23T13:56:55.182253Z","steps":["trace[1849260986] 'agreement among raft nodes before linearized reading'  (duration: 6.031785544s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:56:55.182290Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:56:49.131327Z","time spent":"6.050953398s","remote":"127.0.0.1:49456","response type":"/etcdserverpb.KV/Range","request count":0,"request size":121,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:500 "}
	{"level":"warn","ts":"2024-09-23T13:56:55.163206Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.587162076s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/apiserver-dwjfkhva6bhjvm7y7pripco2ae\" ","response":"range_response_count:1 size:688"}
	{"level":"info","ts":"2024-09-23T13:56:55.182471Z","caller":"traceutil/trace.go:171","msg":"trace[587049062] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-dwjfkhva6bhjvm7y7pripco2ae; range_end:; response_count:1; response_revision:2486; }","duration":"6.606423812s","start":"2024-09-23T13:56:48.576039Z","end":"2024-09-23T13:56:55.182463Z","steps":["trace[587049062] 'agreement among raft nodes before linearized reading'  (duration: 6.587135016s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:56:55.182505Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:56:48.576001Z","time spent":"6.606494497s","remote":"127.0.0.1:49682","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":712,"request content":"key:\"/registry/leases/kube-system/apiserver-dwjfkhva6bhjvm7y7pripco2ae\" "}
	{"level":"warn","ts":"2024-09-23T13:56:55.163252Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"6.800388959s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-952506-m02\" ","response":"range_response_count:1 size:6102"}
	{"level":"info","ts":"2024-09-23T13:56:55.182655Z","caller":"traceutil/trace.go:171","msg":"trace[430433105] range","detail":"{range_begin:/registry/minions/ha-952506-m02; range_end:; response_count:1; response_revision:2486; }","duration":"6.819789045s","start":"2024-09-23T13:56:48.362859Z","end":"2024-09-23T13:56:55.182648Z","steps":["trace[430433105] 'agreement among raft nodes before linearized reading'  (duration: 6.800355991s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:56:55.182677Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:56:48.362811Z","time spent":"6.819859649s","remote":"127.0.0.1:49588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":6126,"request content":"key:\"/registry/minions/ha-952506-m02\" "}
	{"level":"warn","ts":"2024-09-23T13:56:55.140795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.508086393s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:500 ","response":"range_response_count:4 size:2585"}
	{"level":"info","ts":"2024-09-23T13:56:55.182822Z","caller":"traceutil/trace.go:171","msg":"trace[1222858166] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:4; response_revision:2486; }","duration":"5.550114454s","start":"2024-09-23T13:56:49.632700Z","end":"2024-09-23T13:56:55.182815Z","steps":["trace[1222858166] 'agreement among raft nodes before linearized reading'  (duration: 5.508030461s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:56:55.182848Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:56:49.632653Z","time spent":"5.550184754s","remote":"127.0.0.1:49504","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":4,"response size":2609,"request content":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" limit:500 "}
	
	
	==> kernel <==
	 13:58:22 up 15:40,  0 users,  load average: 1.78, 2.47, 2.00
	Linux ha-952506 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [5bbb791962a771f66fb8bb5c24f4fb4f025a57775e49cc97f223b67265eef52d] <==
	Trace[1860030862]: [30.001096714s] [30.001096714s] END
	E0923 13:57:42.684586       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0923 13:57:44.284534       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0923 13:57:44.284573       1 metrics.go:61] Registering metrics
	I0923 13:57:44.284653       1 controller.go:374] Syncing nftables rules
	I0923 13:57:52.618387       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0923 13:57:52.618529       1 main.go:322] Node ha-952506-m02 has CIDR [10.244.1.0/24] 
	I0923 13:57:52.619000       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0923 13:57:52.619136       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0923 13:57:52.619175       1 main.go:322] Node ha-952506-m04 has CIDR [10.244.3.0/24] 
	I0923 13:57:52.619530       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0923 13:57:52.619595       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:57:52.619613       1 main.go:299] handling current node
	I0923 13:58:02.625577       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:58:02.625688       1 main.go:299] handling current node
	I0923 13:58:02.625712       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0923 13:58:02.625721       1 main.go:322] Node ha-952506-m02 has CIDR [10.244.1.0/24] 
	I0923 13:58:02.625829       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0923 13:58:02.625842       1 main.go:322] Node ha-952506-m04 has CIDR [10.244.3.0/24] 
	I0923 13:58:12.616788       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 13:58:12.616838       1 main.go:299] handling current node
	I0923 13:58:12.616854       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0923 13:58:12.616860       1 main.go:322] Node ha-952506-m02 has CIDR [10.244.1.0/24] 
	I0923 13:58:12.616995       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0923 13:58:12.617010       1 main.go:322] Node ha-952506-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [529a10ef47f16dc98deb8300898e51f3d10beee7960f2e7c023ba90a9a05e9cc] <==
	I0923 13:57:41.010643       1 establishing_controller.go:81] Starting EstablishingController
	I0923 13:57:41.010732       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0923 13:57:41.010747       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0923 13:57:41.010759       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0923 13:57:41.407987       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 13:57:41.408780       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 13:57:41.408801       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 13:57:41.409090       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 13:57:41.409327       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 13:57:41.409380       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 13:57:41.409507       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 13:57:41.409808       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 13:57:41.409891       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 13:57:41.419325       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:57:41.419357       1 policy_source.go:224] refreshing policies
	I0923 13:57:41.426975       1 aggregator.go:171] initial CRD sync complete...
	I0923 13:57:41.427008       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 13:57:41.427016       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 13:57:41.427022       1 cache.go:39] Caches are synced for autoregister controller
	I0923 13:57:41.428676       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 13:57:41.439050       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 13:57:42.022917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0923 13:57:42.613857       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0923 13:57:42.615569       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 13:57:42.636218       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [9738d35f1f4b7e13d574026f634b4a29bd7753f09be593fdf0f320283c6f7090] <==
	W0923 13:56:48.373974       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RoleBinding: etcdserver: request timed out
	E0923 13:56:48.374399       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RoleBinding: failed to list *v1.RoleBinding: etcdserver: request timed out" logger="UnhandledError"
	I0923 13:56:55.275828       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:56:55.277379       1 policy_source.go:224] refreshing policies
	I0923 13:56:55.276607       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 13:56:55.291598       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 13:56:55.292125       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 13:56:55.291703       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 13:56:55.291870       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 13:56:55.296225       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 13:56:55.296331       1 aggregator.go:171] initial CRD sync complete...
	I0923 13:56:55.296366       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 13:56:55.296397       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 13:56:55.296428       1 cache.go:39] Caches are synced for autoregister controller
	I0923 13:56:55.296629       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 13:56:55.296668       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 13:56:55.300587       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0923 13:56:55.321790       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0923 13:56:55.323319       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 13:56:55.331974       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 13:56:55.353425       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0923 13:56:55.357611       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0923 13:56:55.364691       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 13:56:55.397105       1 shared_informer.go:320] Caches are synced for configmaps
	F0923 13:57:37.296923       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
	
	
	==> kube-controller-manager [2d05c57aee66f46a461f56f9d089e3d84b9a13c7985de89de70c4416d326cece] <==
	I0923 13:57:12.750170       1 serving.go:386] Generated self-signed cert in-memory
	I0923 13:57:14.804303       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0923 13:57:14.804335       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:57:14.805822       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 13:57:14.805978       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 13:57:14.806104       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0923 13:57:14.806180       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0923 13:57:24.829320       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [c3196d892971841661d80e8de75b81dd9fcfd87261f93ad3e399e9bf65ba0819] <==
	I0923 13:57:49.539504       1 shared_informer.go:320] Caches are synced for deployment
	I0923 13:57:49.559106       1 shared_informer.go:320] Caches are synced for persistent volume
	I0923 13:57:49.589786       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0923 13:57:49.589971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="106.885µs"
	I0923 13:57:49.590090       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="156.386µs"
	I0923 13:57:49.597557       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 13:57:49.602510       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 13:57:50.054469       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 13:57:50.090382       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 13:57:50.090420       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0923 13:58:03.673892       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-952506-m04"
	I0923 13:58:03.673901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-952506-m04"
	I0923 13:58:03.694208       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-952506-m04"
	I0923 13:58:04.450705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-952506-m04"
	I0923 13:58:11.836701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.782µs"
	I0923 13:58:12.985163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="159.717µs"
	I0923 13:58:14.052570       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="94.554868ms"
	I0923 13:58:14.052756       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.971µs"
	I0923 13:58:15.670034       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-952506-m04"
	I0923 13:58:15.670109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-952506"
	I0923 13:58:15.689152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-952506"
	I0923 13:58:15.719426       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.273059ms"
	I0923 13:58:15.719716       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.912µs"
	I0923 13:58:19.535529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-952506"
	I0923 13:58:20.916531       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-952506"
	
	
	==> kube-proxy [aa46d8f88d7a42f281bfdf7ee7213d492cd2414a53215a4ad56be64b47f95c2f] <==
	I0923 13:57:12.717055       1 server_linux.go:66] "Using iptables proxy"
	I0923 13:57:12.816383       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 13:57:12.816469       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:57:12.928407       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 13:57:12.928473       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:57:12.944098       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:57:12.944471       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:57:12.944494       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:57:12.945810       1 config.go:199] "Starting service config controller"
	I0923 13:57:12.945848       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:57:12.945874       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:57:12.945884       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:57:12.949086       1 config.go:328] "Starting node config controller"
	I0923 13:57:12.949117       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:57:13.051403       1 shared_informer.go:320] Caches are synced for node config
	I0923 13:57:13.051440       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:57:13.051475       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e7dbdce650f16b6de58b2571b21ad4f49d65bf3a502925a7f5f07c8020a90bcf] <==
	E0923 13:56:51.889968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:56:52.039069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:56:52.039200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:56:52.736924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 13:56:52.737064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:56:53.519390       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:56:53.519433       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 13:56:54.413634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:56:54.413686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:56:54.553857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 13:56:54.553897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 13:57:12.151354       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 13:57:41.355287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:43832->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.357815       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:43820->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.357972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:43890->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.358870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:43868->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.359220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:43862->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.359327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:43856->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.359683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:43796->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.360308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:43880->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.361036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:43808->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.361182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:43798->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.361336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:43776->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.363324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:43790->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0923 13:57:41.363440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:43788->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 23 13:57:31 ha-952506 kubelet[759]: I0923 13:57:31.897959     759 scope.go:117] "RemoveContainer" containerID="2d05c57aee66f46a461f56f9d089e3d84b9a13c7985de89de70c4416d326cece"
	Sep 23 13:57:31 ha-952506 kubelet[759]: E0923 13:57:31.898146     759 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-952506_kube-system(99f98089e10fa303a2bbe8dd9abd7bf7)\"" pod="kube-system/kube-controller-manager-ha-952506" podUID="99f98089e10fa303a2bbe8dd9abd7bf7"
	Sep 23 13:57:34 ha-952506 kubelet[759]: E0923 13:57:34.729788     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099854729565122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:57:34 ha-952506 kubelet[759]: E0923 13:57:34.729837     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099854729565122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:57:37 ha-952506 kubelet[759]: I0923 13:57:37.940243     759 scope.go:117] "RemoveContainer" containerID="9738d35f1f4b7e13d574026f634b4a29bd7753f09be593fdf0f320283c6f7090"
	Sep 23 13:57:37 ha-952506 kubelet[759]: I0923 13:57:37.941357     759 status_manager.go:851] "Failed to get status for pod" podUID="8e87b25f4da88b3f6e77e6c9f569d8cb" pod="kube-system/kube-apiserver-ha-952506" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-952506\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Sep 23 13:57:37 ha-952506 kubelet[759]: E0923 13:57:37.943040     759 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-952506.17f7e41c11379162\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-952506.17f7e41c11379162  kube-system   2569 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-952506,UID:8e87b25f4da88b3f6e77e6c9f569d8cb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.1\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-952506,},FirstTimestamp:2024-09-23 13:56:31 +0000 UTC,LastTimestamp:2024-09-23 13:57:37.942570466 +0000 UTC m=+73.411711003,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-952506,}"
	Sep 23 13:57:41 ha-952506 kubelet[759]: E0923 13:57:41.167331     759 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:44448->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 23 13:57:41 ha-952506 kubelet[759]: E0923 13:57:41.168092     759 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:44460->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 23 13:57:41 ha-952506 kubelet[759]: E0923 13:57:41.168147     759 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:44480->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 23 13:57:41 ha-952506 kubelet[759]: E0923 13:57:41.168294     759 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:44444->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 23 13:57:41 ha-952506 kubelet[759]: I0923 13:57:41.950139     759 scope.go:117] "RemoveContainer" containerID="6d88787b5e2952089eca6e666c4edfa7ebb730f4820b5dd70e3d1bdae5def4c0"
	Sep 23 13:57:42 ha-952506 kubelet[759]: I0923 13:57:42.953309     759 scope.go:117] "RemoveContainer" containerID="40565961cacec8bf509dbe1011c211d1612c7b2a37e26d4196b9a8d86fe75e0a"
	Sep 23 13:57:44 ha-952506 kubelet[759]: E0923 13:57:44.732042     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099864731808148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:57:44 ha-952506 kubelet[759]: E0923 13:57:44.732074     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099864731808148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:57:46 ha-952506 kubelet[759]: I0923 13:57:46.711032     759 scope.go:117] "RemoveContainer" containerID="2d05c57aee66f46a461f56f9d089e3d84b9a13c7985de89de70c4416d326cece"
	Sep 23 13:57:53 ha-952506 kubelet[759]: E0923 13:57:53.080989     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-952506?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 23 13:57:54 ha-952506 kubelet[759]: E0923 13:57:54.733898     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099874733700279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:57:54 ha-952506 kubelet[759]: E0923 13:57:54.733935     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099874733700279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:58:03 ha-952506 kubelet[759]: E0923 13:58:03.082104     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-952506?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 23 13:58:04 ha-952506 kubelet[759]: E0923 13:58:04.736015     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099884735796595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:58:04 ha-952506 kubelet[759]: E0923 13:58:04.736050     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099884735796595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:58:13 ha-952506 kubelet[759]: E0923 13:58:13.083048     759 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-952506?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 23 13:58:14 ha-952506 kubelet[759]: E0923 13:58:14.737899     759 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099894737669184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 13:58:14 ha-952506 kubelet[759]: E0923 13:58:14.737939     759 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727099894737669184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-952506 -n ha-952506
helpers_test.go:261: (dbg) Run:  kubectl --context ha-952506 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (126.42s)
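
To triage this failure locally, a minimal sketch of re-running the same post-mortem checks by hand (assuming the profile name ha-952506 and the binary path out/minikube-linux-arm64 from this run; the output file name is illustrative):

	out/minikube-linux-arm64 -p ha-952506 logs --file=restartcluster-postmortem.log
	kubectl --context ha-952506 describe node ha-952506-m04
	kubectl --context ha-952506 -n kube-system logs kube-apiserver-ha-952506 --previous
	kubectl --context ha-952506 get po -A --field-selector=status.phase!=Running

The first command persists the full cluster logs shown above; --previous retrieves the log of the kube-apiserver container that terminated with the PostStartHook "start-service-ip-repair-controllers" failure, and the last query matches the non-Running-pods check the post-mortem already runs.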


Test pass (294/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.9
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 6.27
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 202.9
31 TestAddons/serial/GCPAuth/Namespaces 0.22
35 TestAddons/parallel/InspektorGadget 11.78
38 TestAddons/parallel/CSI 57.81
39 TestAddons/parallel/Headlamp 17.73
40 TestAddons/parallel/CloudSpanner 6.78
41 TestAddons/parallel/LocalPath 53.49
42 TestAddons/parallel/NvidiaDevicePlugin 6.53
43 TestAddons/parallel/Yakd 11.76
44 TestAddons/StoppedEnableDisable 6.2
45 TestCertOptions 38.56
46 TestCertExpiration 240.75
48 TestForceSystemdFlag 41.29
49 TestForceSystemdEnv 38.89
55 TestErrorSpam/setup 29.38
56 TestErrorSpam/start 0.75
57 TestErrorSpam/status 1
58 TestErrorSpam/pause 1.81
59 TestErrorSpam/unpause 1.89
60 TestErrorSpam/stop 1.46
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 81.48
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 29.15
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.28
72 TestFunctional/serial/CacheCmd/cache/add_local 1.43
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
77 TestFunctional/serial/CacheCmd/cache/delete 0.16
78 TestFunctional/serial/MinikubeKubectlCmd 0.15
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
80 TestFunctional/serial/ExtraConfig 34.51
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.71
83 TestFunctional/serial/LogsFileCmd 1.71
84 TestFunctional/serial/InvalidService 4.14
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 16.85
88 TestFunctional/parallel/DryRun 0.43
89 TestFunctional/parallel/InternationalLanguage 0.17
90 TestFunctional/parallel/StatusCmd 1.02
94 TestFunctional/parallel/ServiceCmdConnect 11.93
95 TestFunctional/parallel/AddonsCmd 0.24
96 TestFunctional/parallel/PersistentVolumeClaim 25.5
98 TestFunctional/parallel/SSHCmd 0.68
99 TestFunctional/parallel/CpCmd 2.15
101 TestFunctional/parallel/FileSync 0.32
102 TestFunctional/parallel/CertSync 1.99
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.8
110 TestFunctional/parallel/License 0.25
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.18
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
124 TestFunctional/parallel/ProfileCmd/profile_list 0.39
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
126 TestFunctional/parallel/MountCmd/any-port 9.55
127 TestFunctional/parallel/ServiceCmd/List 0.65
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
130 TestFunctional/parallel/ServiceCmd/Format 0.5
131 TestFunctional/parallel/ServiceCmd/URL 0.42
132 TestFunctional/parallel/MountCmd/specific-port 2.24
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.19
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.46
141 TestFunctional/parallel/ImageCommands/Setup 0.71
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.07
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.69
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
152 TestFunctional/delete_echo-server_images 0.03
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 172.03
159 TestMultiControlPlane/serial/DeployApp 8.29
160 TestMultiControlPlane/serial/PingHostFromPods 1.59
161 TestMultiControlPlane/serial/AddWorkerNode 31.66
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.97
164 TestMultiControlPlane/serial/CopyFile 18.1
165 TestMultiControlPlane/serial/StopSecondaryNode 12.69
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
167 TestMultiControlPlane/serial/RestartSecondaryNode 23.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.7
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 195.99
170 TestMultiControlPlane/serial/DeleteSecondaryNode 12.45
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
172 TestMultiControlPlane/serial/StopCluster 35.75
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
175 TestMultiControlPlane/serial/AddSecondaryNode 72.75
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
180 TestJSONOutput/start/Command 77.99
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.75
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.66
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.85
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.26
205 TestKicCustomNetwork/create_custom_network 42.04
206 TestKicCustomNetwork/use_default_bridge_network 33.15
207 TestKicExistingNetwork 35.6
208 TestKicCustomSubnet 31.81
209 TestKicStaticIP 34.02
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 70.15
214 TestMountStart/serial/StartWithMountFirst 6.47
215 TestMountStart/serial/VerifyMountFirst 0.26
216 TestMountStart/serial/StartWithMountSecond 7.02
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.62
219 TestMountStart/serial/VerifyMountPostDelete 0.25
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 8.43
222 TestMountStart/serial/VerifyMountPostStop 0.25
225 TestMultiNode/serial/FreshStart2Nodes 106.65
226 TestMultiNode/serial/DeployApp2Nodes 7.4
227 TestMultiNode/serial/PingHostFrom2Pods 0.98
228 TestMultiNode/serial/AddNode 58.41
229 TestMultiNode/serial/MultiNodeLabels 0.1
230 TestMultiNode/serial/ProfileList 0.65
231 TestMultiNode/serial/CopyFile 9.6
232 TestMultiNode/serial/StopNode 2.21
233 TestMultiNode/serial/StartAfterStop 9.76
234 TestMultiNode/serial/RestartKeepsNodes 103.06
235 TestMultiNode/serial/DeleteNode 5.46
236 TestMultiNode/serial/StopMultiNode 23.91
237 TestMultiNode/serial/RestartMultiNode 51.81
238 TestMultiNode/serial/ValidateNameConflict 34
243 TestPreload 125.93
245 TestScheduledStopUnix 105.08
248 TestInsufficientStorage 10.3
249 TestRunningBinaryUpgrade 90.31
251 TestKubernetesUpgrade 405.83
252 TestMissingContainerUpgrade 168.67
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 38.22
256 TestNoKubernetes/serial/StartWithStopK8s 30.92
257 TestNoKubernetes/serial/Start 10.34
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
259 TestNoKubernetes/serial/ProfileList 4.37
260 TestNoKubernetes/serial/Stop 1.27
261 TestNoKubernetes/serial/StartNoArgs 7.01
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
263 TestStoppedBinaryUpgrade/Setup 0.9
264 TestStoppedBinaryUpgrade/Upgrade 90.11
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
274 TestPause/serial/Start 81.31
275 TestPause/serial/SecondStartNoReconfiguration 30.65
276 TestPause/serial/Pause 0.73
277 TestPause/serial/VerifyStatus 0.31
278 TestPause/serial/Unpause 0.66
279 TestPause/serial/PauseAgain 0.89
280 TestPause/serial/DeletePaused 2.65
281 TestPause/serial/VerifyDeletedResources 12.85
289 TestNetworkPlugins/group/false 4.31
294 TestStartStop/group/old-k8s-version/serial/FirstStart 184.9
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.81
297 TestStartStop/group/old-k8s-version/serial/DeployApp 12.89
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.52
299 TestStartStop/group/old-k8s-version/serial/Stop 12.4
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
301 TestStartStop/group/old-k8s-version/serial/SecondStart 135.69
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.49
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.59
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
310 TestStartStop/group/old-k8s-version/serial/Pause 2.93
312 TestStartStop/group/embed-certs/serial/FirstStart 74.16
313 TestStartStop/group/embed-certs/serial/DeployApp 9.35
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
315 TestStartStop/group/embed-certs/serial/Stop 11.97
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/embed-certs/serial/SecondStart 277.62
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.95
323 TestStartStop/group/no-preload/serial/FirstStart 61.36
324 TestStartStop/group/no-preload/serial/DeployApp 10.36
325 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
326 TestStartStop/group/no-preload/serial/Stop 12.01
327 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
328 TestStartStop/group/no-preload/serial/SecondStart 289.29
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
332 TestStartStop/group/embed-certs/serial/Pause 2.97
334 TestStartStop/group/newest-cni/serial/FirstStart 36.66
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
337 TestStartStop/group/newest-cni/serial/Stop 1.26
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/newest-cni/serial/SecondStart 15.84
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
343 TestStartStop/group/newest-cni/serial/Pause 3.07
344 TestNetworkPlugins/group/auto/Start 80.91
345 TestNetworkPlugins/group/auto/KubeletFlags 0.28
346 TestNetworkPlugins/group/auto/NetCatPod 10.31
347 TestNetworkPlugins/group/auto/DNS 0.18
348 TestNetworkPlugins/group/auto/Localhost 0.16
349 TestNetworkPlugins/group/auto/HairPin 0.16
350 TestNetworkPlugins/group/kindnet/Start 79.68
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
353 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
354 TestStartStop/group/no-preload/serial/Pause 4.14
355 TestNetworkPlugins/group/calico/Start 62.48
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
358 TestNetworkPlugins/group/kindnet/NetCatPod 14.25
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/calico/KubeletFlags 0.3
361 TestNetworkPlugins/group/calico/NetCatPod 11.28
362 TestNetworkPlugins/group/kindnet/DNS 0.26
363 TestNetworkPlugins/group/kindnet/Localhost 0.15
364 TestNetworkPlugins/group/kindnet/HairPin 0.17
365 TestNetworkPlugins/group/calico/DNS 0.25
366 TestNetworkPlugins/group/calico/Localhost 0.23
367 TestNetworkPlugins/group/calico/HairPin 0.27
368 TestNetworkPlugins/group/custom-flannel/Start 55.56
369 TestNetworkPlugins/group/enable-default-cni/Start 50.9
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.27
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.34
374 TestNetworkPlugins/group/custom-flannel/DNS 0.17
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
380 TestNetworkPlugins/group/flannel/Start 66.51
381 TestNetworkPlugins/group/bridge/Start 87.11
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
384 TestNetworkPlugins/group/flannel/NetCatPod 13.29
385 TestNetworkPlugins/group/flannel/DNS 0.18
386 TestNetworkPlugins/group/flannel/Localhost 0.15
387 TestNetworkPlugins/group/flannel/HairPin 0.16
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
389 TestNetworkPlugins/group/bridge/NetCatPod 11.27
390 TestNetworkPlugins/group/bridge/DNS 0.22
391 TestNetworkPlugins/group/bridge/Localhost 0.25
392 TestNetworkPlugins/group/bridge/HairPin 0.19
TestDownloadOnly/v1.20.0/json-events (6.9s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-801108 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-801108 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.89559871s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.90s)
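For reference outside the harness: with -o=json, minikube prints one JSON object per stdout line. Below is a minimal Go sketch that runs the same command and decodes each event generically; the event schema is not shown in this report, so no field names are assumed.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json", "--download-only",
			"-p", "download-only-801108", "--force", "--kubernetes-version=v1.20.0",
			"--container-runtime=crio", "--driver=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			var ev map[string]interface{}
			if json.Unmarshal(sc.Bytes(), &ev) == nil { // each stdout line is one event
				fmt.Println(ev)
			}
		}
		_ = cmd.Wait()
	}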

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 13:24:31.842804 2383070 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0923 13:24:31.842892 2383070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
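The preload-exists assertion reduces to a stat of the cached tarball at the path logged by preload.go:146 above. A sketch follows; the absolute path is specific to this Jenkins host, so substitute your own MINIKUBE_HOME prefix.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Path copied verbatim from the log line above; host-specific.
		p := "/home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4"
		if _, err := os.Stat(p); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("preload found")
	}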

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-801108
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-801108: exit status 85 (62.759694ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-801108 | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |          |
	|         | -p download-only-801108        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:24:24
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:24:24.992622 2383076 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:24:24.992850 2383076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:24:24.992877 2383076 out.go:358] Setting ErrFile to fd 2...
	I0923 13:24:24.992896 2383076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:24:24.993146 2383076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	W0923 13:24:24.993316 2383076 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19690-2377681/.minikube/config/config.json: open /home/jenkins/minikube-integration/19690-2377681/.minikube/config/config.json: no such file or directory
	I0923 13:24:24.993771 2383076 out.go:352] Setting JSON to true
	I0923 13:24:24.994707 2383076 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":54408,"bootTime":1727043457,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0923 13:24:24.994808 2383076 start.go:139] virtualization:  
	I0923 13:24:24.997360 2383076 out.go:97] [download-only-801108] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0923 13:24:24.997562 2383076 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 13:24:24.997610 2383076 notify.go:220] Checking for updates...
	I0923 13:24:24.998816 2383076 out.go:169] MINIKUBE_LOCATION=19690
	I0923 13:24:25.000165 2383076 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:24:25.001740 2383076 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:24:25.003590 2383076 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	I0923 13:24:25.004819 2383076 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 13:24:25.007466 2383076 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 13:24:25.007815 2383076 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:24:25.038461 2383076 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:24:25.038576 2383076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:24:25.096141 2383076 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:24:25.086351221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:24:25.096265 2383076 docker.go:318] overlay module found
	I0923 13:24:25.097823 2383076 out.go:97] Using the docker driver based on user configuration
	I0923 13:24:25.097857 2383076 start.go:297] selected driver: docker
	I0923 13:24:25.097865 2383076 start.go:901] validating driver "docker" against <nil>
	I0923 13:24:25.097969 2383076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:24:25.148132 2383076 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:24:25.138937682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:24:25.148351 2383076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:24:25.148648 2383076 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 13:24:25.148854 2383076 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 13:24:25.150719 2383076 out.go:169] Using Docker driver with root privileges
	I0923 13:24:25.152417 2383076 cni.go:84] Creating CNI manager for ""
	I0923 13:24:25.152482 2383076 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:24:25.152501 2383076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 13:24:25.152593 2383076 start.go:340] cluster config:
	{Name:download-only-801108 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-801108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:24:25.153944 2383076 out.go:97] Starting "download-only-801108" primary control-plane node in "download-only-801108" cluster
	I0923 13:24:25.153967 2383076 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 13:24:25.155354 2383076 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:24:25.155385 2383076 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 13:24:25.155496 2383076 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:24:25.172598 2383076 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:24:25.172833 2383076 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:24:25.172964 2383076 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:24:25.235697 2383076 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0923 13:24:25.235725 2383076 cache.go:56] Caching tarball of preloaded images
	I0923 13:24:25.235882 2383076 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 13:24:25.237769 2383076 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 13:24:25.237797 2383076 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0923 13:24:25.326509 2383076 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0923 13:24:29.507881 2383076 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	
	
	* The control-plane node download-only-801108 host does not exist
	  To start a cluster, run: "minikube start -p download-only-801108"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
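The non-zero exit here is the expected outcome: as the captured stdout notes, a download-only profile never creates a host, so "minikube logs" has nothing to collect. A sketch of how such an exit status is read in Go (general os/exec behavior, not this suite's actual helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-801108").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode()) // 85 in the run above
		}
	}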

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-801108
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (6.27s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-496865 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-496865 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.273889617s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.27s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 13:24:38.510709 2383070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0923 13:24:38.510749 2383070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-496865
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-496865: exit status 85 (69.263828ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-801108 | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | -p download-only-801108        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| delete  | -p download-only-801108        | download-only-801108 | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	| start   | -o=json --download-only        | download-only-496865 | jenkins | v1.34.0 | 23 Sep 24 13:24 UTC |                     |
	|         | -p download-only-496865        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:24:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:24:32.278661 2383276 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:24:32.278820 2383276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:24:32.278832 2383276 out.go:358] Setting ErrFile to fd 2...
	I0923 13:24:32.278838 2383276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:24:32.279084 2383276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 13:24:32.279498 2383276 out.go:352] Setting JSON to true
	I0923 13:24:32.280392 2383276 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":54415,"bootTime":1727043457,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0923 13:24:32.280464 2383276 start.go:139] virtualization:  
	I0923 13:24:32.282767 2383276 out.go:97] [download-only-496865] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:24:32.282972 2383276 notify.go:220] Checking for updates...
	I0923 13:24:32.284477 2383276 out.go:169] MINIKUBE_LOCATION=19690
	I0923 13:24:32.286447 2383276 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:24:32.288204 2383276 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:24:32.289593 2383276 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	I0923 13:24:32.290919 2383276 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0923 13:24:32.293693 2383276 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 13:24:32.293942 2383276 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:24:32.319900 2383276 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:24:32.320063 2383276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:24:32.378165 2383276 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 13:24:32.36843974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:24:32.378279 2383276 docker.go:318] overlay module found
	I0923 13:24:32.379870 2383276 out.go:97] Using the docker driver based on user configuration
	I0923 13:24:32.379895 2383276 start.go:297] selected driver: docker
	I0923 13:24:32.379902 2383276 start.go:901] validating driver "docker" against <nil>
	I0923 13:24:32.380004 2383276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:24:32.431990 2383276 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 13:24:32.422554706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:24:32.432202 2383276 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:24:32.432501 2383276 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0923 13:24:32.432668 2383276 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 13:24:32.434489 2383276 out.go:169] Using Docker driver with root privileges
	I0923 13:24:32.435525 2383276 cni.go:84] Creating CNI manager for ""
	I0923 13:24:32.435587 2383276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 13:24:32.435600 2383276 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 13:24:32.435686 2383276 start.go:340] cluster config:
	{Name:download-only-496865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-496865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:24:32.437371 2383276 out.go:97] Starting "download-only-496865" primary control-plane node in "download-only-496865" cluster
	I0923 13:24:32.437389 2383276 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 13:24:32.438412 2383276 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 13:24:32.438436 2383276 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:24:32.438612 2383276 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 13:24:32.453427 2383276 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 13:24:32.453560 2383276 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 13:24:32.453585 2383276 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 13:24:32.453590 2383276 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 13:24:32.453600 2383276 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 13:24:32.495942 2383276 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0923 13:24:32.495970 2383276 cache.go:56] Caching tarball of preloaded images
	I0923 13:24:32.496138 2383276 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 13:24:32.497950 2383276 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 13:24:32.497971 2383276 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I0923 13:24:32.580462 2383276 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0923 13:24:37.017255 2383276 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I0923 13:24:37.017397 2383276 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19690-2377681/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-496865 host does not exist
	  To start a cluster, run: "minikube start -p download-only-496865"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
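The preload URL above carries a "?checksum=md5:<hex>" parameter, and preload.go:247/254 log a save/verify pair for it. A sketch of the same check done by hand, hashing the tarball with crypto/md5 and comparing against the hex value from the v1.31.1 URL:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		want := "8285fc512c7462f100de137f91fcd0ae" // from the URL's checksum parameter above
		f, err := os.Open("preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		fmt.Println(hex.EncodeToString(h.Sum(nil)) == want)
	}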

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-496865
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
I0923 13:24:39.698579 2383070 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-127301 --alsologtostderr --binary-mirror http://127.0.0.1:42465 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-127301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-127301
--- PASS: TestBinaryMirror (0.56s)
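TestBinaryMirror points --binary-mirror at a local HTTP endpoint (127.0.0.1:42465 in this run). A stand-in mirror can be as small as a static file server; the dl.k8s.io-style path layout in the comment is an assumption inferred from the kubectl URL logged above, not something this report specifies.

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Assumed layout: mirror-root/release/v1.31.1/bin/linux/arm64/kubectl
		log.Fatal(http.ListenAndServe("127.0.0.1:42465", http.FileServer(http.Dir("mirror-root"))))
	}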

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-133262
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-133262: exit status 85 (64.090822ms)

-- stdout --
	* Profile "addons-133262" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-133262"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-133262
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-133262: exit status 85 (65.363029ms)

-- stdout --
	* Profile "addons-133262" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-133262"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (202.9s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-133262 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-133262 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m22.902640883s)
--- PASS: TestAddons/Setup (202.90s)

TestAddons/serial/GCPAuth/Namespaces (0.22s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-133262 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-133262 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.22s)

TestAddons/parallel/InspektorGadget (11.78s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ncv7d" [d19e7839-1016-4e61-ba5e-28b2f0a6c2eb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004410651s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-133262
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-133262: (5.771350633s)
--- PASS: TestAddons/parallel/InspektorGadget (11.78s)
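The "waiting 8m0s for pods matching ..." steps here and in the sections below are label-selector polls. A rough sketch of the idea via kubectl's jsonpath output; the real helper (helpers_test.go:344) also tracks per-container readiness, which this simplification skips.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(8 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-133262",
				"get", "pods", "-n", "gadget", "-l", "k8s-app=gadget",
				"-o", "jsonpath={.items[*].status.phase}").Output()
			phases := string(out)
			// Phase-only check; a faithful port would also inspect Ready conditions.
			if err == nil && phases != "" && !strings.Contains(phases, "Pending") {
				fmt.Println("pods up:", phases)
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out")
	}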

TestAddons/parallel/CSI (57.81s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0923 13:37:25.492654 2383070 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 13:37:25.512157 2383070 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 13:37:25.512468 2383070 kapi.go:107] duration metric: took 19.822945ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 19.844581ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-133262 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133262 get pvc hpvc -o jsonpath={.status.phase} -n default
... (19 further identical polls)
addons_test.go:518: (dbg) Run:  kubectl --context addons-133262 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [48dffd6d-3f4a-414e-9085-6ac80d055028] Pending
helpers_test.go:344: "task-pv-pod" [48dffd6d-3f4a-414e-9085-6ac80d055028] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [48dffd6d-3f4a-414e-9085-6ac80d055028] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003435154s
addons_test.go:528: (dbg) Run:  kubectl --context addons-133262 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-133262 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-133262 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-133262 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-133262 delete pod task-pv-pod: (1.007367155s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-133262 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-133262 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133262 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
... (8 further identical polls)
addons_test.go:560: (dbg) Run:  kubectl --context addons-133262 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e08575e2-f352-4ca3-b182-d11843b72507] Pending
helpers_test.go:344: "task-pv-pod-restore" [e08575e2-f352-4ca3-b182-d11843b72507] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e08575e2-f352-4ca3-b182-d11843b72507] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004751875s
addons_test.go:570: (dbg) Run:  kubectl --context addons-133262 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-133262 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-133262 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-133262 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.715660925s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.81s)
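
Note on the repeated helpers_test.go:419 and helpers_test.go:394 lines above: they are a poll loop — the helper re-runs the same kubectl jsonpath query until the resource reports the expected value or the wait window (6m0s here) expires. A minimal Go sketch of that wait pattern, shelling out to kubectl the same way; the function name, poll interval, and error handling are illustrative assumptions, not the actual helpers_test.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForJSONPath re-runs a kubectl jsonpath query until the output equals
// want or the timeout expires, mirroring the retry visible in the
// helpers_test.go:394/:419 lines above. Illustrative only.
func waitForJSONPath(kubeContext, kind, name, jsonpath, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", kind, name, "-o", "jsonpath="+jsonpath, "-n", "default").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil // resource reached the desired state
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("timed out waiting for %s/%s to report %q", kind, name, want)
}

func main() {
	// The volume snapshot wait from the CSI test above.
	err := waitForJSONPath("addons-133262", "volumesnapshot", "new-snapshot-demo",
		"{.status.readyToUse}", "true", 6*time.Minute)
	fmt.Println(err)
}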

                                                
                                    
TestAddons/parallel/Headlamp (17.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-133262 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-hxf5t" [e73050f1-82e6-45e8-add4-f40324f7307a] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-hxf5t" [e73050f1-82e6-45e8-add4-f40324f7307a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-hxf5t" [e73050f1-82e6-45e8-add4-f40324f7307a] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004000768s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-133262 addons disable headlamp --alsologtostderr -v=1: (5.791623564s)
--- PASS: TestAddons/parallel/Headlamp (17.73s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-qsshz" [4a04108e-baee-4cff-80b6-9c420e9913a9] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0051749s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-133262
--- PASS: TestAddons/parallel/CloudSpanner (6.78s)

                                                
                                    
TestAddons/parallel/LocalPath (53.49s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-133262 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-133262 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133262 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-133262 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d8278108-9f8c-4e03-a6ab-990aae96c025] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d8278108-9f8c-4e03-a6ab-990aae96c025] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d8278108-9f8c-4e03-a6ab-990aae96c025] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004527044s
addons_test.go:938: (dbg) Run:  kubectl --context addons-133262 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 ssh "cat /opt/local-path-provisioner/pvc-ba93c3ca-4ceb-4c2d-8d75-76b896b20b5e_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-133262 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-133262 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-133262 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.363365826s)
--- PASS: TestAddons/parallel/LocalPath (53.49s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4m26g" [c0e73bf1-5273-4a14-9517-202ce22276b8] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00420427s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-133262
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.53s)

                                                
                                    
TestAddons/parallel/Yakd (11.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-8drqn" [674af74b-1516-4be5-883b-d26a527903b2] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003877805s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-133262 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-133262 addons disable yakd --alsologtostderr -v=1: (5.751365266s)
--- PASS: TestAddons/parallel/Yakd (11.76s)

                                                
                                    
TestAddons/StoppedEnableDisable (6.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-133262
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-133262: (5.930422677s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-133262
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-133262
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-133262
--- PASS: TestAddons/StoppedEnableDisable (6.20s)

                                                
                                    
TestCertOptions (38.56s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-768279 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-768279 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.90058557s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-768279 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-768279 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-768279 -- "sudo cat /etc/kubernetes/admin.conf"
E0923 14:26:44.466651 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:175: Cleaning up "cert-options-768279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-768279
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-768279: (1.987086737s)
--- PASS: TestCertOptions (38.56s)

                                                
                                    
TestCertExpiration (240.75s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-556907 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-556907 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.09531914s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-556907 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-556907 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.232931118s)
helpers_test.go:175: Cleaning up "cert-expiration-556907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-556907
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-556907: (2.410382841s)
--- PASS: TestCertExpiration (240.75s)

                                                
                                    
TestForceSystemdFlag (41.29s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-380959 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-380959 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.804802649s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-380959 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-380959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-380959
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-380959: (2.947890332s)
--- PASS: TestForceSystemdFlag (41.29s)

                                                
                                    
TestForceSystemdEnv (38.89s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-520171 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-520171 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.16030346s)
helpers_test.go:175: Cleaning up "force-systemd-env-520171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-520171
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-520171: (2.725140612s)
--- PASS: TestForceSystemdEnv (38.89s)

                                                
                                    
TestErrorSpam/setup (29.38s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-037526 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-037526 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-037526 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-037526 --driver=docker  --container-runtime=crio: (29.380064464s)
--- PASS: TestErrorSpam/setup (29.38s)

                                                
                                    
TestErrorSpam/start (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

                                                
                                    
TestErrorSpam/status (1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 status
--- PASS: TestErrorSpam/status (1.00s)

                                                
                                    
TestErrorSpam/pause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 pause
--- PASS: TestErrorSpam/pause (1.81s)

                                                
                                    
TestErrorSpam/unpause (1.89s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 unpause
--- PASS: TestErrorSpam/unpause (1.89s)

                                                
                                    
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 stop: (1.259929201s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-037526 --log_dir /tmp/nospam-037526 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19690-2377681/.minikube/files/etc/test/nested/copy/2383070/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (81.48s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085557 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-085557 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m21.476932946s)
--- PASS: TestFunctional/serial/StartWithProxy (81.48s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.15s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0923 13:45:22.436806 2383070 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085557 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-085557 --alsologtostderr -v=8: (29.147300553s)
functional_test.go:663: soft start took 29.147810178s for "functional-085557" cluster.
I0923 13:45:51.584410 2383070 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.15s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-085557 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-085557 cache add registry.k8s.io/pause:3.1: (1.386184525s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-085557 cache add registry.k8s.io/pause:3.3: (1.557296624s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-085557 cache add registry.k8s.io/pause:latest: (1.331427984s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-085557 /tmp/TestFunctionalserialCacheCmdcacheadd_local3974636536/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 cache add minikube-local-cache-test:functional-085557
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 cache delete minikube-local-cache-test:functional-085557
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-085557
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085557 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.757461ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-085557 cache reload: (1.226030073s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)
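
The cache_reload sequence above amounts to four steps: delete the image on the node with crictl rmi, confirm crictl inspecti now fails ("no such image"), run cache reload, then confirm inspecti succeeds again. A rough Go sketch driving the same four commands (command strings taken verbatim from the log) via os/exec; the run helper and its output handling are assumptions, not minikube test code:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output. Assumed helper,
// not part of the minikube test suite.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	mk, profile := "out/minikube-linux-arm64", "functional-085557"
	// 1. Remove the cached image inside the node.
	run(mk, "-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	// 2. inspecti should now fail, as the log above shows.
	if run(mk, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	// 3. Restore the image from minikube's on-disk cache.
	run(mk, "-p", profile, "cache", "reload")
	// 4. inspecti succeeds again once the cache is reloaded.
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("expected inspecti to succeed after cache reload:", err)
	}
}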

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 kubectl -- --context functional-085557 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-085557 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.51s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085557 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-085557 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.50518846s)
functional_test.go:761: restart took 34.505279641s for "functional-085557" cluster.
I0923 13:46:34.967203 2383070 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.51s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-085557 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-085557 logs: (1.713142533s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 logs --file /tmp/TestFunctionalserialLogsFileCmd1424416752/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-085557 logs --file /tmp/TestFunctionalserialLogsFileCmd1424416752/001/logs.txt: (1.712046703s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.71s)

                                                
                                    
TestFunctional/serial/InvalidService (4.14s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-085557 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-085557
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-085557: exit status 115 (532.046802ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32160 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-085557 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.14s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085557 config get cpus: exit status 14 (80.213382ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085557 config get cpus: exit status 14 (76.359043ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-085557 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-085557 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2411008: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.85s)

                                                
                                    
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085557 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-085557 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (175.835054ms)

                                                
                                                
-- stdout --
	* [functional-085557] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:47:15.305691 2410768 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:47:15.305890 2410768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:47:15.305922 2410768 out.go:358] Setting ErrFile to fd 2...
	I0923 13:47:15.305945 2410768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:47:15.306206 2410768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 13:47:15.306678 2410768 out.go:352] Setting JSON to false
	I0923 13:47:15.307734 2410768 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":55778,"bootTime":1727043457,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0923 13:47:15.307831 2410768 start.go:139] virtualization:  
	I0923 13:47:15.314399 2410768 out.go:177] * [functional-085557] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 13:47:15.317688 2410768 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:47:15.317788 2410768 notify.go:220] Checking for updates...
	I0923 13:47:15.322422 2410768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:47:15.324879 2410768 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:47:15.327489 2410768 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	I0923 13:47:15.329979 2410768 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:47:15.332461 2410768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:47:15.335404 2410768 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:47:15.336023 2410768 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:47:15.361731 2410768 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:47:15.361867 2410768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:47:15.415252 2410768 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:47:15.405025367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:47:15.415368 2410768 docker.go:318] overlay module found
	I0923 13:47:15.418203 2410768 out.go:177] * Using the docker driver based on existing profile
	I0923 13:47:15.420800 2410768 start.go:297] selected driver: docker
	I0923 13:47:15.420826 2410768 start.go:901] validating driver "docker" against &{Name:functional-085557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-085557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:47:15.420955 2410768 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:47:15.424154 2410768 out.go:201] 
	W0923 13:47:15.426810 2410768 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 13:47:15.429708 2410768 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085557 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
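
The exit status 23 above comes from minikube rejecting the requested memory before any driver work happens: the error text states a usable minimum of 1800MB, so the 250MB request fails validation. A toy sketch of that floor check — the constant is quoted from the message above, but the function is an illustration, not minikube's actual validation code:

package main

import "fmt"

// Floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message above; the check
// itself is a simplified illustration, not minikube's validation code.
const minUsableMemoryMB = 1800

func validateRequestedMemory(reqMB int) error {
	if reqMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			reqMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // rejected, as in the --memory 250MB dry run
	fmt.Println(validateRequestedMemory(4000)) // accepted, matching the profile's 4000MB
}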

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085557 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-085557 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (168.766155ms)

                                                
                                                
-- stdout --
	* [functional-085557] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 13:47:15.147788 2410723 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:47:15.147981 2410723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:47:15.147997 2410723 out.go:358] Setting ErrFile to fd 2...
	I0923 13:47:15.148004 2410723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:47:15.148404 2410723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 13:47:15.148842 2410723 out.go:352] Setting JSON to false
	I0923 13:47:15.149906 2410723 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":55778,"bootTime":1727043457,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0923 13:47:15.150009 2410723 start.go:139] virtualization:  
	I0923 13:47:15.152234 2410723 out.go:177] * [functional-085557] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0923 13:47:15.154208 2410723 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:47:15.154345 2410723 notify.go:220] Checking for updates...
	I0923 13:47:15.156734 2410723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:47:15.158145 2410723 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 13:47:15.159291 2410723 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	I0923 13:47:15.160810 2410723 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 13:47:15.162194 2410723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:47:15.164402 2410723 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:47:15.164944 2410723 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:47:15.193040 2410723 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 13:47:15.193171 2410723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:47:15.243438 2410723 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 13:47:15.23298547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:47:15.243561 2410723 docker.go:318] overlay module found
	I0923 13:47:15.245368 2410723 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 13:47:15.247066 2410723 start.go:297] selected driver: docker
	I0923 13:47:15.247085 2410723 start.go:901] validating driver "docker" against &{Name:functional-085557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-085557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:47:15.247196 2410723 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:47:15.249525 2410723 out.go:201] 
	W0923 13:47:15.251561 2410723 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 13:47:15.253693 2410723 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
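
This test passes precisely because the quoted output is not English: under a French locale, minikube prints "Utilisation du pilote docker basé sur le profil existant" ("Using the docker driver based on existing profile") and exits with "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB"). A minimal reproduction sketch, assuming LC_ALL is how the test selects the locale and that the deliberately undersized --memory request is what triggers the error:

    # Hypothetical re-run; the profile name and flags are inferred from the log above.
    LC_ALL=fr out/minikube-linux-arm64 start -p functional-085557 --dry-run --memory 250MB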

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
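
The second invocation above exercises a custom Go-template status format; the "kublet" label is verbatim from the test's format string, while the field it actually reads is .Kubelet. A sketch of the same probe with a corrected label (the field names are those shown in the log; the quoting is an assumption):

    out/minikube-linux-arm64 -p functional-085557 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'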

TestFunctional/parallel/ServiceCmdConnect (11.93s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-085557 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-085557 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-26mvr" [ee7006c9-5925-4b25-a28c-411fcb7e4da2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-26mvr" [ee7006c9-5925-4b25-a28c-411fcb7e4da2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005867171s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31372
functional_test.go:1675: http://192.168.49.2:31372: success! body:

Hostname: hello-node-connect-65d86f57f4-26mvr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31372
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.93s)
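
The test resolves the service's NodePort URL with "minikube service ... --url" and then fetches it; the echoserver body above is what a manual probe returns. A hedged sketch of the same check (curl and grep stand in for the test's built-in HTTP client):

    # Hypothetical manual probe of the NodePort endpoint discovered by the test.
    URL=$(out/minikube-linux-arm64 -p functional-085557 service hello-node-connect --url)
    curl -s "$URL" | grep '^Hostname:'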

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (25.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [35e4e409-f111-4507-9d7b-cead557c5481] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003715022s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-085557 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-085557 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-085557 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-085557 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [342f6318-67ac-4dd2-b9e1-4e61203c2abc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [342f6318-67ac-4dd2-b9e1-4e61203c2abc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004012684s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-085557 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-085557 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-085557 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [80c055c6-4e55-4854-82a1-5d0166f1ba4a] Pending
helpers_test.go:344: "sp-pod" [80c055c6-4e55-4854-82a1-5d0166f1ba4a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004903106s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-085557 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.50s)
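
The sequence above is: wait for the provisioner, create a PVC, mount it in sp-pod, write a file, delete and recreate the pod, then verify the file survived the pod's lifetime. A condensed sketch of the persistence check (the paths come from the test's testdata; the jsonpath probe for the Bound phase is an assumption, not part of the test):

    kubectl --context functional-085557 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-085557 get pvc myclaim -o jsonpath='{.status.phase}'   # expect: Bound
    kubectl --context functional-085557 exec sp-pod -- touch /tmp/mount/foo
    # ...delete and recreate the pod, then confirm the file outlived it:
    kubectl --context functional-085557 exec sp-pod -- ls /tmp/mount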

TestFunctional/parallel/SSHCmd (0.68s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

TestFunctional/parallel/CpCmd (2.15s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh -n functional-085557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 cp functional-085557:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3469061778/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh -n functional-085557 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh -n functional-085557 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.15s)

TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2383070/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo cat /etc/test/nested/copy/2383070/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

TestFunctional/parallel/CertSync (1.99s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2383070.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo cat /etc/ssl/certs/2383070.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2383070.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo cat /usr/share/ca-certificates/2383070.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/23830702.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo cat /etc/ssl/certs/23830702.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/23830702.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo cat /usr/share/ca-certificates/23830702.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.99s)
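
The hashed filenames (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names for the synced certificates. A sketch of how such a name can be derived, assuming openssl is available inside the node:

    # Hypothetical check: the subject hash of the synced PEM should match the .0 filename.
    out/minikube-linux-arm64 -p functional-085557 ssh "openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/2383070.pem"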

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-085557 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.8s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085557 ssh "sudo systemctl is-active docker": exit status 1 (400.61376ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085557 ssh "sudo systemctl is-active containerd": exit status 1 (400.295237ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.80s)
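
systemctl is-active prints the unit state and exits non-zero for anything other than "active", so exit status 3 with "inactive" on stdout is the expected result for both docker and containerd on this crio-backed profile. The complementary positive check, as a sketch (crio is the configured runtime per the profile config quoted earlier):

    out/minikube-linux-arm64 -p functional-085557 ssh "sudo systemctl is-active crio"   # expect: active, exit 0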

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-085557 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-085557 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-085557 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-085557 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2408665: os: process already finished
helpers_test.go:508: unable to kill pid 2408476: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-085557 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-085557 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4eb7423e-fd9c-4733-bf44-e38971169475] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4eb7423e-fd9c-4733-bf44-e38971169475] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004476162s
I0923 13:46:52.915902 2383070 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)
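
With the tunnel from StartTunnel running, the nginx-svc LoadBalancer service should be assigned an ingress IP. A sketch of polling for it (this is the same jsonpath the IngressIP subtest below uses):

    kubectl --context functional-085557 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'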

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-085557 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.18s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.59.99 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-085557 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-085557 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-085557 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-4kdf6" [2591e211-6e70-46a4-8f36-a310f1c38f19] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-4kdf6" [2591e211-6e70-46a4-8f36-a310f1c38f19] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005870537s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "338.218288ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "56.552971ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "330.071287ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "55.072491ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/MountCmd/any-port (9.55s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdany-port1566871783/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727099231498448174" to /tmp/TestFunctionalparallelMountCmdany-port1566871783/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727099231498448174" to /tmp/TestFunctionalparallelMountCmdany-port1566871783/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727099231498448174" to /tmp/TestFunctionalparallelMountCmdany-port1566871783/001/test-1727099231498448174
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085557 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (326.71616ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 13:47:11.825435 2383070 retry.go:31] will retry after 679.485012ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 13:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 13:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 13:47 test-1727099231498448174
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh cat /mount-9p/test-1727099231498448174
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-085557 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e598f9f7-1518-42e0-bf3b-3bb7c1ad6d43] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e598f9f7-1518-42e0-bf3b-3bb7c1ad6d43] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e598f9f7-1518-42e0-bf3b-3bb7c1ad6d43] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.008075184s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-085557 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdany-port1566871783/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.55s)
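
The flow above: start a 9p mount daemon, confirm the mount with findmnt (the first probe races the daemon startup and is retried), list and read the seeded files, run the busybox-mount pod against the mount, then stat files created from both sides. The probe-and-inspect steps as a standalone sketch, using the same commands the test runs:

    out/minikube-linux-arm64 -p functional-085557 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-085557 ssh -- ls -la /mount-9p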

TestFunctional/parallel/ServiceCmd/List (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 service list -o json
functional_test.go:1494: Took "500.634502ms" to run "out/minikube-linux-arm64 -p functional-085557 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31375
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ServiceCmd/URL (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31375
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

TestFunctional/parallel/MountCmd/specific-port (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdspecific-port2809804737/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085557 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.925324ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 13:47:21.449435 2383070 retry.go:31] will retry after 518.050196ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdspecific-port2809804737/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085557 ssh "sudo umount -f /mount-9p": exit status 1 (393.07311ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-085557 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdspecific-port2809804737/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)
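
Unlike any-port, this variant pins the 9p server to --port 46464, and the failed forced unmount at the end is tolerated by the test: the mount daemon has already been stopped, so /mount-9p is "not mounted." by the time umount -f runs. A hypothetical host-side check that the fixed port is listening while the daemon runs (ss is an assumption; any socket lister works):

    # Run on the host while "minikube mount ... --port 46464" is active.
    ss -tln | grep 46464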

TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1771959219/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1771959219/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1771959219/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-085557 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1771959219/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1771959219/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085557 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1771959219/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-085557 version -o=json --components: (1.193031087s)
--- PASS: TestFunctional/parallel/Version/components (1.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-085557 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-085557
localhost/kicbase/echo-server:functional-085557
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-085557 image ls --format short --alsologtostderr:
I0923 13:47:34.912277 2413490 out.go:345] Setting OutFile to fd 1 ...
I0923 13:47:34.912521 2413490 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:47:34.912551 2413490 out.go:358] Setting ErrFile to fd 2...
I0923 13:47:34.912569 2413490 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:47:34.912844 2413490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
I0923 13:47:34.913528 2413490 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 13:47:34.913721 2413490 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 13:47:34.914241 2413490 cli_runner.go:164] Run: docker container inspect functional-085557 --format={{.State.Status}}
I0923 13:47:34.937383 2413490 ssh_runner.go:195] Run: systemctl --version
I0923 13:47:34.937439 2413490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085557
I0923 13:47:34.972805 2413490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35744 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/functional-085557/id_rsa Username:docker}
I0923 13:47:35.071152 2413490 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
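
As the stderr trace shows, "image ls" shells into the node and reads the image store with "sudo crictl images --output json", then formats the result client-side. A sketch of querying the same source directly (jq and its filter are assumptions, not part of the test):

    out/minikube-linux-arm64 -p functional-085557 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]?'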

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-085557 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| docker.io/library/nginx                 | alpine             | b887aca7aed61 | 48.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/kicbase/echo-server           | functional-085557  | ce2d2cda2d858 | 4.79MB |
| localhost/minikube-local-cache-test     | functional-085557  | c04f5fa3cf53c | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/library/nginx                 | latest             | 195245f0c7927 | 197MB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-085557 image ls --format table --alsologtostderr:
I0923 13:47:35.349249 2413609 out.go:345] Setting OutFile to fd 1 ...
I0923 13:47:35.349378 2413609 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:47:35.349383 2413609 out.go:358] Setting ErrFile to fd 2...
I0923 13:47:35.349388 2413609 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:47:35.349707 2413609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
I0923 13:47:35.350745 2413609 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 13:47:35.350896 2413609 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 13:47:35.351580 2413609 cli_runner.go:164] Run: docker container inspect functional-085557 --format={{.State.Status}}
I0923 13:47:35.376391 2413609 ssh_runner.go:195] Run: systemctl --version
I0923 13:47:35.376445 2413609 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085557
I0923 13:47:35.397765 2413609 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35744 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/functional-085557/id_rsa Username:docker}
I0923 13:47:35.503082 2413609 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-085557 image ls --format json --alsologtostderr:
[{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172029"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"
d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5db
f1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-085557"],"size":"4788229"},{"id":"c04f5fa3cf53ca3c37306b9f8df96b9bcad6b64b24309f82aab3573f445e6efd","repoDigests":["localhost/minikube-local-cache-test@sha256:9496dd442b428c098580a837e0f4a7c2867c879d350ad306a324bb31bf28b2a4"],"repoTags":["localhost/minikube-local-cache-test:functional-085557"],"size":"3328"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6521220
9347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-mini
kube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304
a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48375489"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0c
aca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-085557 image ls --format json --alsologtostderr:
I0923 13:47:35.197864 2413574 out.go:345] Setting OutFile to fd 1 ...
I0923 13:47:35.198024 2413574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:47:35.198037 2413574 out.go:358] Setting ErrFile to fd 2...
I0923 13:47:35.198043 2413574 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:47:35.198297 2413574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
I0923 13:47:35.199014 2413574 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 13:47:35.199137 2413574 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 13:47:35.199645 2413574 cli_runner.go:164] Run: docker container inspect functional-085557 --format={{.State.Status}}
I0923 13:47:35.217567 2413574 ssh_runner.go:195] Run: systemctl --version
I0923 13:47:35.217642 2413574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085557
I0923 13:47:35.238894 2413574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35744 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/functional-085557/id_rsa Username:docker}
I0923 13:47:35.346901 2413574 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-085557 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-085557
size: "4788229"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171
repoTags:
- docker.io/library/nginx:latest
size: "197172029"
- id: c04f5fa3cf53ca3c37306b9f8df96b9bcad6b64b24309f82aab3573f445e6efd
repoDigests:
- localhost/minikube-local-cache-test@sha256:9496dd442b428c098580a837e0f4a7c2867c879d350ad306a324bb31bf28b2a4
repoTags:
- localhost/minikube-local-cache-test:functional-085557
size: "3328"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "48375489"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-085557 image ls --format yaml --alsologtostderr:
I0923 13:47:35.085815 2413545 out.go:345] Setting OutFile to fd 1 ...
I0923 13:47:35.090758 2413545 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:47:35.090818 2413545 out.go:358] Setting ErrFile to fd 2...
I0923 13:47:35.090841 2413545 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:47:35.091162 2413545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
I0923 13:47:35.091925 2413545 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 13:47:35.092115 2413545 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 13:47:35.092632 2413545 cli_runner.go:164] Run: docker container inspect functional-085557 --format={{.State.Status}}
I0923 13:47:35.110020 2413545 ssh_runner.go:195] Run: systemctl --version
I0923 13:47:35.110080 2413545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085557
I0923 13:47:35.139562 2413545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35744 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/functional-085557/id_rsa Username:docker}
I0923 13:47:35.234846 2413545 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085557 ssh pgrep buildkitd: exit status 1 (311.312803ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image build -t localhost/my-image:functional-085557 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-085557 image build -t localhost/my-image:functional-085557 testdata/build --alsologtostderr: (2.92167986s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-085557 image build -t localhost/my-image:functional-085557 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c6db0183fc9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-085557
--> 7265e2bc159
Successfully tagged localhost/my-image:functional-085557
7265e2bc159ef16a750a1f000387f74b83df123fce340f922a9b838fc9ebe09b
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-085557 image build -t localhost/my-image:functional-085557 testdata/build --alsologtostderr:
I0923 13:47:35.772834 2413705 out.go:345] Setting OutFile to fd 1 ...
I0923 13:47:35.780830 2413705 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:47:35.780856 2413705 out.go:358] Setting ErrFile to fd 2...
I0923 13:47:35.780862 2413705 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 13:47:35.781203 2413705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
I0923 13:47:35.781953 2413705 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 13:47:35.782624 2413705 config.go:182] Loaded profile config "functional-085557": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 13:47:35.783169 2413705 cli_runner.go:164] Run: docker container inspect functional-085557 --format={{.State.Status}}
I0923 13:47:35.799985 2413705 ssh_runner.go:195] Run: systemctl --version
I0923 13:47:35.800042 2413705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085557
I0923 13:47:35.816502 2413705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35744 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/functional-085557/id_rsa Username:docker}
I0923 13:47:35.906905 2413705 build_images.go:161] Building image from path: /tmp/build.113054069.tar
I0923 13:47:35.907010 2413705 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 13:47:35.917352 2413705 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.113054069.tar
I0923 13:47:35.921323 2413705 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.113054069.tar: stat -c "%s %y" /var/lib/minikube/build/build.113054069.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.113054069.tar': No such file or directory
I0923 13:47:35.921353 2413705 ssh_runner.go:362] scp /tmp/build.113054069.tar --> /var/lib/minikube/build/build.113054069.tar (3072 bytes)
I0923 13:47:35.947261 2413705 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.113054069
I0923 13:47:35.956309 2413705 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.113054069 -xf /var/lib/minikube/build/build.113054069.tar
I0923 13:47:35.965919 2413705 crio.go:315] Building image: /var/lib/minikube/build/build.113054069
I0923 13:47:35.966005 2413705 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-085557 /var/lib/minikube/build/build.113054069 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0923 13:47:38.619465 2413705 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-085557 /var/lib/minikube/build/build.113054069 --cgroup-manager=cgroupfs: (2.653427574s)
I0923 13:47:38.619534 2413705 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.113054069
I0923 13:47:38.628150 2413705 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.113054069.tar
I0923 13:47:38.636706 2413705 build_images.go:217] Built localhost/my-image:functional-085557 from /tmp/build.113054069.tar
I0923 13:47:38.636736 2413705 build_images.go:133] succeeded building to: functional-085557
I0923 13:47:38.636741 2413705 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.46s)
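For reference, the three STEP lines in the build output above imply a build context roughly like the following sketch (the real contents of testdata/build are not shown in this log, so the file contents here are assumed):

  # reproduce the ImageBuild flow by hand -- a minimal sketch, not the test's actual fixture
  mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  echo hello > content.txt    # placeholder payload; the real content.txt differs
  out/minikube-linux-arm64 -p functional-085557 image build -t localhost/my-image:functional-085557 . --alsologtostderr

As the Stderr log above shows, minikube does not build on the host: it tars the context (/tmp/build.113054069.tar), copies it onto the node over SSH, and runs sudo podman build there.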
TestFunctional/parallel/ImageCommands/Setup (0.71s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-085557
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image load --daemon kicbase/echo-server:functional-085557 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-085557 image load --daemon kicbase/echo-server:functional-085557 --alsologtostderr: (1.144983267s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image load --daemon kicbase/echo-server:functional-085557 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-085557
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image load --daemon kicbase/echo-server:functional-085557 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image save kicbase/echo-server:functional-085557 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image rm kicbase/echo-server:functional-085557 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image ls
2024/09/23 13:47:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-085557
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 image save --daemon kicbase/echo-server:functional-085557 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-085557
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)
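Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon above exercise a full image round trip between the host and the cluster runtime. A minimal sketch of the same flow by hand (the tar path is illustrative):

  out/minikube-linux-arm64 -p functional-085557 image save kicbase/echo-server:functional-085557 /tmp/echo-server.tar
  out/minikube-linux-arm64 -p functional-085557 image rm kicbase/echo-server:functional-085557
  out/minikube-linux-arm64 -p functional-085557 image ls    # tag should now be absent
  out/minikube-linux-arm64 -p functional-085557 image load /tmp/echo-server.tar
  out/minikube-linux-arm64 -p functional-085557 image save --daemon kicbase/echo-server:functional-085557
  docker image inspect localhost/kicbase/echo-server:functional-085557    # back in the host daemon, under localhost/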
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-085557 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-085557
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-085557
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-085557
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (172.03s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-952506 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0923 13:48:03.743492 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:03.750170 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:03.761576 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:03.782955 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:03.825203 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:03.906941 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:04.068609 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:04.390204 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:05.032302 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:06.313871 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:08.875863 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:13.998062 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:24.239750 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:48:44.721840 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:49:25.683417 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-952506 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m51.171338926s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (172.03s)

TestMultiControlPlane/serial/DeployApp (8.29s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-952506 -- rollout status deployment/busybox: (5.482400443s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-94cn4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-mm8mn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-zp5bc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-94cn4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-mm8mn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-zp5bc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-94cn4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-mm8mn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-zp5bc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.29s)

TestMultiControlPlane/serial/PingHostFromPods (1.59s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-94cn4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-94cn4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-mm8mn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-mm8mn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-zp5bc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-952506 -- exec busybox-7dff88458-zp5bc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)

TestMultiControlPlane/serial/AddWorkerNode (31.66s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-952506 -v=7 --alsologtostderr
E0923 13:50:47.606506 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-952506 -v=7 --alsologtostderr: (30.726994139s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (31.66s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-952506 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)

TestMultiControlPlane/serial/CopyFile (18.1s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp testdata/cp-test.txt ha-952506:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4226284073/001/cp-test_ha-952506.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506:/home/docker/cp-test.txt ha-952506-m02:/home/docker/cp-test_ha-952506_ha-952506-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m02 "sudo cat /home/docker/cp-test_ha-952506_ha-952506-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506:/home/docker/cp-test.txt ha-952506-m03:/home/docker/cp-test_ha-952506_ha-952506-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m03 "sudo cat /home/docker/cp-test_ha-952506_ha-952506-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506:/home/docker/cp-test.txt ha-952506-m04:/home/docker/cp-test_ha-952506_ha-952506-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m04 "sudo cat /home/docker/cp-test_ha-952506_ha-952506-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp testdata/cp-test.txt ha-952506-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4226284073/001/cp-test_ha-952506-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m02:/home/docker/cp-test.txt ha-952506:/home/docker/cp-test_ha-952506-m02_ha-952506.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506 "sudo cat /home/docker/cp-test_ha-952506-m02_ha-952506.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m02:/home/docker/cp-test.txt ha-952506-m03:/home/docker/cp-test_ha-952506-m02_ha-952506-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m03 "sudo cat /home/docker/cp-test_ha-952506-m02_ha-952506-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m02:/home/docker/cp-test.txt ha-952506-m04:/home/docker/cp-test_ha-952506-m02_ha-952506-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m04 "sudo cat /home/docker/cp-test_ha-952506-m02_ha-952506-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp testdata/cp-test.txt ha-952506-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4226284073/001/cp-test_ha-952506-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m03:/home/docker/cp-test.txt ha-952506:/home/docker/cp-test_ha-952506-m03_ha-952506.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506 "sudo cat /home/docker/cp-test_ha-952506-m03_ha-952506.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m03:/home/docker/cp-test.txt ha-952506-m02:/home/docker/cp-test_ha-952506-m03_ha-952506-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m02 "sudo cat /home/docker/cp-test_ha-952506-m03_ha-952506-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m03:/home/docker/cp-test.txt ha-952506-m04:/home/docker/cp-test_ha-952506-m03_ha-952506-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m04 "sudo cat /home/docker/cp-test_ha-952506-m03_ha-952506-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp testdata/cp-test.txt ha-952506-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4226284073/001/cp-test_ha-952506-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m04:/home/docker/cp-test.txt ha-952506:/home/docker/cp-test_ha-952506-m04_ha-952506.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506 "sudo cat /home/docker/cp-test_ha-952506-m04_ha-952506.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m04:/home/docker/cp-test.txt ha-952506-m02:/home/docker/cp-test_ha-952506-m04_ha-952506-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m02 "sudo cat /home/docker/cp-test_ha-952506-m04_ha-952506-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m04:/home/docker/cp-test.txt ha-952506-m03:/home/docker/cp-test_ha-952506-m04_ha-952506-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m03 "sudo cat /home/docker/cp-test_ha-952506-m04_ha-952506-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.10s)
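The CopyFile block above walks the full copy matrix: host to node, node back to host, and node to node for every pair, verifying each hop with ssh -n <node> "sudo cat ...". A minimal sketch of a single round trip (the destination file name on the host is illustrative):

  out/minikube-linux-arm64 -p ha-952506 cp testdata/cp-test.txt ha-952506-m02:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-952506 ssh -n ha-952506-m02 "sudo cat /home/docker/cp-test.txt"
  out/minikube-linux-arm64 -p ha-952506 cp ha-952506-m02:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt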
TestMultiControlPlane/serial/StopSecondaryNode (12.69s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 node stop m02 -v=7 --alsologtostderr
E0923 13:51:44.466509 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:51:44.472903 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:51:44.484297 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:51:44.505660 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:51:44.547123 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:51:44.628584 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:51:44.790073 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:51:45.111898 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:51:45.753739 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-952506 node stop m02 -v=7 --alsologtostderr: (11.968945164s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr
E0923 13:51:47.036006 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr: exit status 7 (716.568122ms)
-- stdout --
	ha-952506
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-952506-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-952506-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-952506-m04
	type: Worker
	host: Running
	kubelet: Running
-- /stdout --
** stderr ** 
	I0923 13:51:46.404312 2429369 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:51:46.404448 2429369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:51:46.404459 2429369 out.go:358] Setting ErrFile to fd 2...
	I0923 13:51:46.404465 2429369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:51:46.404740 2429369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 13:51:46.404928 2429369 out.go:352] Setting JSON to false
	I0923 13:51:46.404978 2429369 mustload.go:65] Loading cluster: ha-952506
	I0923 13:51:46.405064 2429369 notify.go:220] Checking for updates...
	I0923 13:51:46.405388 2429369 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:51:46.405404 2429369 status.go:174] checking status of ha-952506 ...
	I0923 13:51:46.406289 2429369 cli_runner.go:164] Run: docker container inspect ha-952506 --format={{.State.Status}}
	I0923 13:51:46.427008 2429369 status.go:364] ha-952506 host status = "Running" (err=<nil>)
	I0923 13:51:46.427038 2429369 host.go:66] Checking if "ha-952506" exists ...
	I0923 13:51:46.427359 2429369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506
	I0923 13:51:46.455985 2429369 host.go:66] Checking if "ha-952506" exists ...
	I0923 13:51:46.456293 2429369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:51:46.456347 2429369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506
	I0923 13:51:46.474802 2429369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35749 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506/id_rsa Username:docker}
	I0923 13:51:46.567480 2429369 ssh_runner.go:195] Run: systemctl --version
	I0923 13:51:46.572393 2429369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:51:46.584829 2429369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 13:51:46.638376 2429369 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-23 13:51:46.62812105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 13:51:46.639039 2429369 kubeconfig.go:125] found "ha-952506" server: "https://192.168.49.254:8443"
	I0923 13:51:46.639078 2429369 api_server.go:166] Checking apiserver status ...
	I0923 13:51:46.639125 2429369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:51:46.649943 2429369 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1356/cgroup
	I0923 13:51:46.659523 2429369 api_server.go:182] apiserver freezer: "8:freezer:/docker/bdebc792e7c455cbc6f78a638b85a950a5e11ee9e984295d53f0bb3d9e2f2bef/crio/crio-0529ccf89b82914e99c1ba06a2044ce90be11200b0bc0c5566959cb1e5bee8b9"
	I0923 13:51:46.659593 2429369 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bdebc792e7c455cbc6f78a638b85a950a5e11ee9e984295d53f0bb3d9e2f2bef/crio/crio-0529ccf89b82914e99c1ba06a2044ce90be11200b0bc0c5566959cb1e5bee8b9/freezer.state
	I0923 13:51:46.668379 2429369 api_server.go:204] freezer state: "THAWED"
	I0923 13:51:46.668412 2429369 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 13:51:46.676148 2429369 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 13:51:46.676183 2429369 status.go:456] ha-952506 apiserver status = Running (err=<nil>)
	I0923 13:51:46.676196 2429369 status.go:176] ha-952506 status: &{Name:ha-952506 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:51:46.676214 2429369 status.go:174] checking status of ha-952506-m02 ...
	I0923 13:51:46.676562 2429369 cli_runner.go:164] Run: docker container inspect ha-952506-m02 --format={{.State.Status}}
	I0923 13:51:46.698915 2429369 status.go:364] ha-952506-m02 host status = "Stopped" (err=<nil>)
	I0923 13:51:46.698942 2429369 status.go:377] host is not running, skipping remaining checks
	I0923 13:51:46.698949 2429369 status.go:176] ha-952506-m02 status: &{Name:ha-952506-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:51:46.698969 2429369 status.go:174] checking status of ha-952506-m03 ...
	I0923 13:51:46.699285 2429369 cli_runner.go:164] Run: docker container inspect ha-952506-m03 --format={{.State.Status}}
	I0923 13:51:46.715335 2429369 status.go:364] ha-952506-m03 host status = "Running" (err=<nil>)
	I0923 13:51:46.715362 2429369 host.go:66] Checking if "ha-952506-m03" exists ...
	I0923 13:51:46.715669 2429369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506-m03
	I0923 13:51:46.732154 2429369 host.go:66] Checking if "ha-952506-m03" exists ...
	I0923 13:51:46.732469 2429369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:51:46.732516 2429369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m03
	I0923 13:51:46.748908 2429369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35759 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m03/id_rsa Username:docker}
	I0923 13:51:46.839409 2429369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:51:46.852072 2429369 kubeconfig.go:125] found "ha-952506" server: "https://192.168.49.254:8443"
	I0923 13:51:46.852110 2429369 api_server.go:166] Checking apiserver status ...
	I0923 13:51:46.852158 2429369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:51:46.863423 2429369 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1319/cgroup
	I0923 13:51:46.874630 2429369 api_server.go:182] apiserver freezer: "8:freezer:/docker/8669a8e2a0cf416d7bc74e26948e6bfdf4965dd26b140b5d4fa255ab547b4b0f/crio/crio-5fd14a84ce310ad3b3d95285ba60ae6929ab76702a98c5133a5c78e662d06a81"
	I0923 13:51:46.874802 2429369 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8669a8e2a0cf416d7bc74e26948e6bfdf4965dd26b140b5d4fa255ab547b4b0f/crio/crio-5fd14a84ce310ad3b3d95285ba60ae6929ab76702a98c5133a5c78e662d06a81/freezer.state
	I0923 13:51:46.884364 2429369 api_server.go:204] freezer state: "THAWED"
	I0923 13:51:46.884404 2429369 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 13:51:46.892376 2429369 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 13:51:46.892407 2429369 status.go:456] ha-952506-m03 apiserver status = Running (err=<nil>)
	I0923 13:51:46.892418 2429369 status.go:176] ha-952506-m03 status: &{Name:ha-952506-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:51:46.892435 2429369 status.go:174] checking status of ha-952506-m04 ...
	I0923 13:51:46.892747 2429369 cli_runner.go:164] Run: docker container inspect ha-952506-m04 --format={{.State.Status}}
	I0923 13:51:46.911603 2429369 status.go:364] ha-952506-m04 host status = "Running" (err=<nil>)
	I0923 13:51:46.911627 2429369 host.go:66] Checking if "ha-952506-m04" exists ...
	I0923 13:51:46.911932 2429369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-952506-m04
	I0923 13:51:46.929752 2429369 host.go:66] Checking if "ha-952506-m04" exists ...
	I0923 13:51:46.930071 2429369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:51:46.930118 2429369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-952506-m04
	I0923 13:51:46.947626 2429369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35764 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/ha-952506-m04/id_rsa Username:docker}
	I0923 13:51:47.047267 2429369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:51:47.060627 2429369 status.go:176] ha-952506-m04 status: &{Name:ha-952506-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.69s)
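Note that the Non-zero exit above is the expected outcome: with m02 stopped, status reports the degraded node in its stdout and signals it through a non-zero exit code (7 in this run; the exit-code meaning is inferred from this log, not from minikube documentation). A minimal sketch of observing the same behaviour by hand:

  out/minikube-linux-arm64 -p ha-952506 node stop m02 -v=7 --alsologtostderr
  out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr
  echo "status exit code: $?"    # 7 while m02 is stopped, matching the run above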
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

TestMultiControlPlane/serial/RestartSecondaryNode (23.08s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 node start m02 -v=7 --alsologtostderr
E0923 13:51:49.598359 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:51:54.720209 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:52:04.961856 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-952506 node start m02 -v=7 --alsologtostderr: (21.546336195s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr: (1.39260395s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.698933509s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.70s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (195.99s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-952506 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-952506 -v=7 --alsologtostderr
E0923 13:52:25.443280 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-952506 -v=7 --alsologtostderr: (37.222888297s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-952506 --wait=true -v=7 --alsologtostderr
E0923 13:53:03.744017 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:53:06.405373 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:53:31.448122 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:54:28.326755 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-952506 --wait=true -v=7 --alsologtostderr: (2m38.585175367s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-952506
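
Note: the pass condition here is that node list returns the same node set before the stop and after the restart; --wait=true makes start block until all components are healthy, which is why the restart leg alone takes over two and a half minutes. A sketch of the comparison (profile name from this run):

	before=$(out/minikube-linux-arm64 node list -p ha-952506)
	out/minikube-linux-arm64 stop -p ha-952506
	out/minikube-linux-arm64 start -p ha-952506 --wait=true
	after=$(out/minikube-linux-arm64 node list -p ha-952506)
	[ "$before" = "$after" ] && echo "node set preserved"
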
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (195.99s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.45s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-952506 node delete m03 -v=7 --alsologtostderr: (11.442235559s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
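
Note: the go-template above prints one Ready-condition status per node. An equivalent jsonpath form, shown only as an illustration (this is not what the test runs):

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
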
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.45s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

TestMultiControlPlane/serial/StopCluster (35.75s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-952506 stop -v=7 --alsologtostderr: (35.641452389s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr: exit status 7 (107.144544ms)
-- stdout --
	ha-952506
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-952506-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-952506-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0923 13:56:17.461155 2443781 out.go:345] Setting OutFile to fd 1 ...
	I0923 13:56:17.461274 2443781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:56:17.461284 2443781 out.go:358] Setting ErrFile to fd 2...
	I0923 13:56:17.461290 2443781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:56:17.461550 2443781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 13:56:17.461742 2443781 out.go:352] Setting JSON to false
	I0923 13:56:17.461780 2443781 mustload.go:65] Loading cluster: ha-952506
	I0923 13:56:17.461817 2443781 notify.go:220] Checking for updates...
	I0923 13:56:17.462201 2443781 config.go:182] Loaded profile config "ha-952506": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 13:56:17.462217 2443781 status.go:174] checking status of ha-952506 ...
	I0923 13:56:17.463079 2443781 cli_runner.go:164] Run: docker container inspect ha-952506 --format={{.State.Status}}
	I0923 13:56:17.480919 2443781 status.go:364] ha-952506 host status = "Stopped" (err=<nil>)
	I0923 13:56:17.480943 2443781 status.go:377] host is not running, skipping remaining checks
	I0923 13:56:17.480951 2443781 status.go:176] ha-952506 status: &{Name:ha-952506 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:56:17.480978 2443781 status.go:174] checking status of ha-952506-m02 ...
	I0923 13:56:17.481276 2443781 cli_runner.go:164] Run: docker container inspect ha-952506-m02 --format={{.State.Status}}
	I0923 13:56:17.507095 2443781 status.go:364] ha-952506-m02 host status = "Stopped" (err=<nil>)
	I0923 13:56:17.507121 2443781 status.go:377] host is not running, skipping remaining checks
	I0923 13:56:17.507128 2443781 status.go:176] ha-952506-m02 status: &{Name:ha-952506-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:56:17.507149 2443781 status.go:174] checking status of ha-952506-m04 ...
	I0923 13:56:17.507454 2443781 cli_runner.go:164] Run: docker container inspect ha-952506-m04 --format={{.State.Status}}
	I0923 13:56:17.524777 2443781 status.go:364] ha-952506-m04 host status = "Stopped" (err=<nil>)
	I0923 13:56:17.524800 2443781 status.go:377] host is not running, skipping remaining checks
	I0923 13:56:17.524807 2443781 status.go:176] ha-952506-m04 status: &{Name:ha-952506-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
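
Note: minikube status signals a fully stopped cluster through its exit code (7 in this run) rather than through stderr, so scripted health checks should branch on the code, not on output. A minimal sketch:

	out/minikube-linux-arm64 -p ha-952506 status
	rc=$?   # 0 = everything running; non-zero (7 here) = one or more components stopped
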
--- PASS: TestMultiControlPlane/serial/StopCluster (35.75s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (72.75s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-952506 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-952506 --control-plane -v=7 --alsologtostderr: (1m11.781046386s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-952506 status -v=7 --alsologtostderr
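
Note: node add --control-plane joins the new machine as an additional control-plane member rather than a worker, restoring the cluster to three apiservers; the follow-up status call is what confirms each node reports Running/Configured. Usage sketch (profile from this run):

	out/minikube-linux-arm64 node add -p ha-952506 --control-plane
	out/minikube-linux-arm64 -p ha-952506 status
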
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

TestJSONOutput/start/Command (77.99s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-660869 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-660869 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m17.982048951s)
--- PASS: TestJSONOutput/start/Command (77.99s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-660869 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-660869 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-660869 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-660869 --output=json --user=testUser: (5.84742543s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-060148 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-060148 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.966072ms)
-- stdout --
	{"specversion":"1.0","id":"09a4698e-fa0a-4805-953f-e93022d0a086","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-060148] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d929f1dc-b240-4e98-a36e-834d4d130165","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"50464347-4797-46be-98be-e0cd13914d25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eeefdd1f-2573-47c8-ab48-f09c8933d187","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig"}}
	{"specversion":"1.0","id":"4c121095-8303-4827-9c77-dfb72e08dbed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube"}}
	{"specversion":"1.0","id":"4c1f9c4d-1f2f-4754-ae02-82733f7631e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"afb13bf3-ff6c-424c-8a17-964f3f692ac0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"85e5527b-31bd-4f3f-b93e-1531a72aa916","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-060148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-060148
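
Note: the stdout above is minikube's --output=json event stream: one CloudEvents-style JSON object per line, with failures emitted as type io.k8s.sigs.minikube.error. A sketch for filtering that stream (assumes jq is installed; the profile name "demo" is hypothetical):

	out/minikube-linux-arm64 start -p demo --output=json \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
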
--- PASS: TestErrorJSONOutput (0.26s)

TestKicCustomNetwork/create_custom_network (42.04s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-730429 --network=
E0923 14:01:44.466913 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-730429 --network=: (39.876150839s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-730429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-730429
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-730429: (2.136524838s)
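
Note: with the docker driver, --network= (empty value) lets minikube create and own a dedicated bridge network, which the test then confirms via docker network ls. To pin a cluster to a specifically named network instead, a sketch (the profile name "demo" and network name "my-bridge" are hypothetical):

	out/minikube-linux-arm64 start -p demo --network=my-bridge
	docker network ls --format {{.Name}} | grep my-bridge
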
--- PASS: TestKicCustomNetwork/create_custom_network (42.04s)

TestKicCustomNetwork/use_default_bridge_network (33.15s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-924925 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-924925 --network=bridge: (31.201093097s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-924925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-924925
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-924925: (1.925384964s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.15s)

TestKicExistingNetwork (35.6s)

=== RUN   TestKicExistingNetwork
I0923 14:02:36.047206 2383070 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 14:02:36.063374 2383070 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 14:02:36.063466 2383070 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0923 14:02:36.063492 2383070 cli_runner.go:164] Run: docker network inspect existing-network
W0923 14:02:36.079576 2383070 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0923 14:02:36.079610 2383070 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0923 14:02:36.079632 2383070 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0923 14:02:36.079741 2383070 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 14:02:36.099630 2383070 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e2123346e879 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:15:aa:4e:69} reservation:<nil>}
I0923 14:02:36.100121 2383070 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000f600}
I0923 14:02:36.100162 2383070 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0923 14:02:36.100222 2383070 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0923 14:02:36.171199 2383070 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-225209 --network=existing-network
E0923 14:03:03.746561 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-225209 --network=existing-network: (33.555122061s)
helpers_test.go:175: Cleaning up "existing-network-225209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-225209
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-225209: (1.886052021s)
I0923 14:03:11.628268 2383070 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
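
Note: the trace above also shows minikube's free-subnet scan: 192.168.49.0/24 is skipped as taken and 192.168.58.0/24 is picked for the new bridge. The same network can be pre-created by hand and then reused; a sketch with the flags copied from the log above:

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
	out/minikube-linux-arm64 start -p existing-network-225209 --network=existing-network
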
--- PASS: TestKicExistingNetwork (35.60s)

TestKicCustomSubnet (31.81s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-639132 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-639132 --subnet=192.168.60.0/24: (29.750340377s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-639132 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-639132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-639132
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-639132: (2.032135828s)
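
Note: the docker network inspect call above is how the test confirms the requested --subnet actually landed on the created network. Sketch (profile and subnet from this run):

	out/minikube-linux-arm64 start -p custom-subnet-639132 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-639132 --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24
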
--- PASS: TestKicCustomSubnet (31.81s)

TestKicStaticIP (34.02s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-184714 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-184714 --static-ip=192.168.200.200: (31.854686444s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-184714 ip
helpers_test.go:175: Cleaning up "static-ip-184714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-184714
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-184714: (2.012840291s)
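
Note: --static-ip pins the node's address instead of letting minikube pick one from a free private subnet, and minikube ip echoes it back for verification. Sketch (values from this run):

	out/minikube-linux-arm64 start -p static-ip-184714 --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-184714 ip   # expect 192.168.200.200
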
--- PASS: TestKicStaticIP (34.02s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.15s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-836049 --driver=docker  --container-runtime=crio
E0923 14:04:26.810748 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-836049 --driver=docker  --container-runtime=crio: (33.709476971s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-838823 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-838823 --driver=docker  --container-runtime=crio: (30.968267359s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-836049
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-838823
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-838823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-838823
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-838823: (1.959781625s)
helpers_test.go:175: Cleaning up "first-836049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-836049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-836049: (2.215046015s)
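
Note: minikube profile NAME switches the active profile for subsequent commands, and profile list -ojson is the machine-readable view the test parses. A sketch for pulling profile names out of that JSON (the .valid[].Name path is an assumption about the output shape, and jq must be installed):

	out/minikube-linux-arm64 profile first-836049
	out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'
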
--- PASS: TestMinikubeProfile (70.15s)

TestMountStart/serial/StartWithMountFirst (6.47s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-376202 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-376202 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.473957732s)
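
Note: the mount flags above tune the 9p share that --mount exposes inside the guest (this reading of the flags is an assumption from their names): --mount-msize sets the 9p message size, --mount-port the server port, and --mount-uid/--mount-gid the ownership applied to the mounted files; the share appears at /minikube-host, which the VerifyMount steps below list. Sketch (the profile name "demo" is hypothetical):

	out/minikube-linux-arm64 start -p demo --mount --mount-uid 0 --mount-gid 0 --mount-port 46464 --no-kubernetes --driver=docker
	out/minikube-linux-arm64 -p demo ssh -- ls /minikube-host
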
--- PASS: TestMountStart/serial/StartWithMountFirst (6.47s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-376202 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (7.02s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-378130 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-378130 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.01766457s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.02s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-378130 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-376202 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-376202 --alsologtostderr -v=5: (1.615271331s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-378130 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-378130
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-378130: (1.203327923s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.43s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-378130
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-378130: (7.434684226s)
--- PASS: TestMountStart/serial/RestartStopped (8.43s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-378130 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (106.65s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-591540 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0923 14:06:44.466629 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-591540 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m46.144704604s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.65s)

TestMultiNode/serial/DeployApp2Nodes (7.4s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-591540 -- rollout status deployment/busybox: (5.591028921s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- exec busybox-7dff88458-4xfl6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- exec busybox-7dff88458-crwld -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- exec busybox-7dff88458-4xfl6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- exec busybox-7dff88458-crwld -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- exec busybox-7dff88458-4xfl6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- exec busybox-7dff88458-crwld -- nslookup kubernetes.default.svc.cluster.local
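
Note: the nslookup sequence above probes cluster DNS from both pods at increasing specificity: an external name (kubernetes.io), the in-cluster short name (kubernetes.default), and the fully qualified service name. The same probe by hand, as a sketch (the pod name is from this run and changes on every deployment):

	kubectl --context multinode-591540 exec busybox-7dff88458-4xfl6 -- nslookup kubernetes.default.svc.cluster.local
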
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.40s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- exec busybox-7dff88458-4xfl6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- exec busybox-7dff88458-4xfl6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- exec busybox-7dff88458-crwld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-591540 -- exec busybox-7dff88458-crwld -- sh -c "ping -c 1 192.168.67.1"
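
Note: host.minikube.internal resolves to the host-side gateway of the cluster network (192.168.67.1 in this run); the awk/cut pipeline merely extracts that address from busybox's nslookup output so it can be pinged. Manual equivalent (pod name from this run):

	kubectl --context multinode-591540 exec busybox-7dff88458-4xfl6 -- sh -c "nslookup host.minikube.internal"
	kubectl --context multinode-591540 exec busybox-7dff88458-4xfl6 -- ping -c 1 192.168.67.1
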
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (58.41s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-591540 -v 3 --alsologtostderr
E0923 14:08:03.743423 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:08:07.530044 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-591540 -v 3 --alsologtostderr: (57.73947675s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.41s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-591540 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (9.6s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp testdata/cp-test.txt multinode-591540:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp multinode-591540:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1221956085/001/cp-test_multinode-591540.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp multinode-591540:/home/docker/cp-test.txt multinode-591540-m02:/home/docker/cp-test_multinode-591540_multinode-591540-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m02 "sudo cat /home/docker/cp-test_multinode-591540_multinode-591540-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp multinode-591540:/home/docker/cp-test.txt multinode-591540-m03:/home/docker/cp-test_multinode-591540_multinode-591540-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m03 "sudo cat /home/docker/cp-test_multinode-591540_multinode-591540-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp testdata/cp-test.txt multinode-591540-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp multinode-591540-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1221956085/001/cp-test_multinode-591540-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp multinode-591540-m02:/home/docker/cp-test.txt multinode-591540:/home/docker/cp-test_multinode-591540-m02_multinode-591540.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540 "sudo cat /home/docker/cp-test_multinode-591540-m02_multinode-591540.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp multinode-591540-m02:/home/docker/cp-test.txt multinode-591540-m03:/home/docker/cp-test_multinode-591540-m02_multinode-591540-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m03 "sudo cat /home/docker/cp-test_multinode-591540-m02_multinode-591540-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp testdata/cp-test.txt multinode-591540-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp multinode-591540-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1221956085/001/cp-test_multinode-591540-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp multinode-591540-m03:/home/docker/cp-test.txt multinode-591540:/home/docker/cp-test_multinode-591540-m03_multinode-591540.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540 "sudo cat /home/docker/cp-test_multinode-591540-m03_multinode-591540.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 cp multinode-591540-m03:/home/docker/cp-test.txt multinode-591540-m02:/home/docker/cp-test_multinode-591540-m03_multinode-591540-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m02 "sudo cat /home/docker/cp-test_multinode-591540-m03_multinode-591540-m02.txt"
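
Note: the copy matrix above exercises all three directions of minikube cp (host to node, node to host, and node to node), each verified by cat over ssh. General form, as a sketch (the file paths are hypothetical):

	out/minikube-linux-arm64 -p multinode-591540 cp ./local.txt multinode-591540-m02:/home/docker/remote.txt
	out/minikube-linux-arm64 -p multinode-591540 ssh -n multinode-591540-m02 "sudo cat /home/docker/remote.txt"
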
--- PASS: TestMultiNode/serial/CopyFile (9.60s)

TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-591540 node stop m03: (1.20860576s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-591540 status: exit status 7 (514.648179ms)
-- stdout --
	multinode-591540
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-591540-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-591540-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-591540 status --alsologtostderr: exit status 7 (485.048872ms)
-- stdout --
	multinode-591540
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-591540-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-591540-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0923 14:09:00.771706 2498343 out.go:345] Setting OutFile to fd 1 ...
	I0923 14:09:00.771917 2498343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:09:00.771944 2498343 out.go:358] Setting ErrFile to fd 2...
	I0923 14:09:00.771967 2498343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:09:00.772290 2498343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 14:09:00.772546 2498343 out.go:352] Setting JSON to false
	I0923 14:09:00.772614 2498343 mustload.go:65] Loading cluster: multinode-591540
	I0923 14:09:00.772707 2498343 notify.go:220] Checking for updates...
	I0923 14:09:00.773157 2498343 config.go:182] Loaded profile config "multinode-591540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 14:09:00.773311 2498343 status.go:174] checking status of multinode-591540 ...
	I0923 14:09:00.773951 2498343 cli_runner.go:164] Run: docker container inspect multinode-591540 --format={{.State.Status}}
	I0923 14:09:00.792112 2498343 status.go:364] multinode-591540 host status = "Running" (err=<nil>)
	I0923 14:09:00.792135 2498343 host.go:66] Checking if "multinode-591540" exists ...
	I0923 14:09:00.792455 2498343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-591540
	I0923 14:09:00.814178 2498343 host.go:66] Checking if "multinode-591540" exists ...
	I0923 14:09:00.814504 2498343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 14:09:00.814553 2498343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-591540
	I0923 14:09:00.831447 2498343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35869 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/multinode-591540/id_rsa Username:docker}
	I0923 14:09:00.923723 2498343 ssh_runner.go:195] Run: systemctl --version
	I0923 14:09:00.928167 2498343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 14:09:00.940129 2498343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 14:09:00.992988 2498343 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-23 14:09:00.982901885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 14:09:00.993597 2498343 kubeconfig.go:125] found "multinode-591540" server: "https://192.168.67.2:8443"
	I0923 14:09:00.993647 2498343 api_server.go:166] Checking apiserver status ...
	I0923 14:09:00.993695 2498343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 14:09:01.005783 2498343 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup
	I0923 14:09:01.016208 2498343 api_server.go:182] apiserver freezer: "8:freezer:/docker/7bd6a9e9247305d0fbf06852d7d283f883f24749c44931122bc5e5bd91c87fee/crio/crio-bd9b8be061250077888ff9b52a31bccc7e69a6c967d46dc108978a721b8881c3"
	I0923 14:09:01.016289 2498343 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7bd6a9e9247305d0fbf06852d7d283f883f24749c44931122bc5e5bd91c87fee/crio/crio-bd9b8be061250077888ff9b52a31bccc7e69a6c967d46dc108978a721b8881c3/freezer.state
	I0923 14:09:01.025678 2498343 api_server.go:204] freezer state: "THAWED"
	I0923 14:09:01.025708 2498343 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0923 14:09:01.033605 2498343 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0923 14:09:01.033646 2498343 status.go:456] multinode-591540 apiserver status = Running (err=<nil>)
	I0923 14:09:01.033658 2498343 status.go:176] multinode-591540 status: &{Name:multinode-591540 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 14:09:01.033679 2498343 status.go:174] checking status of multinode-591540-m02 ...
	I0923 14:09:01.034013 2498343 cli_runner.go:164] Run: docker container inspect multinode-591540-m02 --format={{.State.Status}}
	I0923 14:09:01.050503 2498343 status.go:364] multinode-591540-m02 host status = "Running" (err=<nil>)
	I0923 14:09:01.050533 2498343 host.go:66] Checking if "multinode-591540-m02" exists ...
	I0923 14:09:01.050834 2498343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-591540-m02
	I0923 14:09:01.066808 2498343 host.go:66] Checking if "multinode-591540-m02" exists ...
	I0923 14:09:01.067144 2498343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 14:09:01.067191 2498343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-591540-m02
	I0923 14:09:01.084178 2498343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35874 SSHKeyPath:/home/jenkins/minikube-integration/19690-2377681/.minikube/machines/multinode-591540-m02/id_rsa Username:docker}
	I0923 14:09:01.175507 2498343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 14:09:01.187233 2498343 status.go:176] multinode-591540-m02 status: &{Name:multinode-591540-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 14:09:01.187269 2498343 status.go:174] checking status of multinode-591540-m03 ...
	I0923 14:09:01.187577 2498343 cli_runner.go:164] Run: docker container inspect multinode-591540-m03 --format={{.State.Status}}
	I0923 14:09:01.203664 2498343 status.go:364] multinode-591540-m03 host status = "Stopped" (err=<nil>)
	I0923 14:09:01.203688 2498343 status.go:377] host is not running, skipping remaining checks
	I0923 14:09:01.203695 2498343 status.go:176] multinode-591540-m03 status: &{Name:multinode-591540-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
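
For context, the apiserver check traced in the log above works in three steps: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz. A rough by-hand equivalent, run inside the node (the pid, cgroup path, and endpoint are from this run; the curl probe is an assumption standing in for the test's Go HTTP client):

    # inside `minikube ssh -p multinode-591540`
    pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')   # apiserver pid (1413 in this run)
    sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup       # locate the freezer cgroup
    curl -sk https://192.168.67.2:8443/healthz            # expect: ok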

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-591540 node start m03 -v=7 --alsologtostderr: (8.994921663s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.76s)
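
Restarting a single stopped node, as exercised here, is just the per-node form of `start`; a sketch using this run's profile and node names:

    minikube -p multinode-591540 node start m03
    minikube -p multinode-591540 status    # all three nodes should report Running
    kubectl get nodes                      # m03 should rejoin the node list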

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (103.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-591540
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-591540
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-591540: (24.84918707s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-591540 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-591540 --wait=true -v=8 --alsologtostderr: (1m18.084089091s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-591540
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.06s)
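
The property verified here is that a full stop/start cycle preserves node membership. The equivalent by hand (profile name is this run's):

    minikube node list -p multinode-591540         # record the node set
    minikube stop -p multinode-591540
    minikube start -p multinode-591540 --wait=true
    minikube node list -p multinode-591540         # the same nodes should come back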

                                                
                                    
TestMultiNode/serial/DeleteNode (5.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-591540 node delete m03: (4.763661855s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.46s)
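
The final check above uses a go-template to print one Ready condition per remaining node. A shell-quoting-friendly version of the same template (single quotes outside so the template's double quotes survive):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # prints one True/False line per node; after a clean delete, every line should read True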

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-591540 stop: (23.710681061s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-591540 status: exit status 7 (99.494351ms)

                                                
                                                
-- stdout --
	multinode-591540
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-591540-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-591540 status --alsologtostderr: exit status 7 (99.049365ms)

                                                
                                                
-- stdout --
	multinode-591540
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-591540-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 14:11:23.345583 2506118 out.go:345] Setting OutFile to fd 1 ...
	I0923 14:11:23.345701 2506118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:11:23.345710 2506118 out.go:358] Setting ErrFile to fd 2...
	I0923 14:11:23.345715 2506118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:11:23.345961 2506118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 14:11:23.346151 2506118 out.go:352] Setting JSON to false
	I0923 14:11:23.346186 2506118 mustload.go:65] Loading cluster: multinode-591540
	I0923 14:11:23.346283 2506118 notify.go:220] Checking for updates...
	I0923 14:11:23.346634 2506118 config.go:182] Loaded profile config "multinode-591540": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 14:11:23.346650 2506118 status.go:174] checking status of multinode-591540 ...
	I0923 14:11:23.347518 2506118 cli_runner.go:164] Run: docker container inspect multinode-591540 --format={{.State.Status}}
	I0923 14:11:23.367533 2506118 status.go:364] multinode-591540 host status = "Stopped" (err=<nil>)
	I0923 14:11:23.367556 2506118 status.go:377] host is not running, skipping remaining checks
	I0923 14:11:23.367563 2506118 status.go:176] multinode-591540 status: &{Name:multinode-591540 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 14:11:23.367595 2506118 status.go:174] checking status of multinode-591540-m02 ...
	I0923 14:11:23.367929 2506118 cli_runner.go:164] Run: docker container inspect multinode-591540-m02 --format={{.State.Status}}
	I0923 14:11:23.396738 2506118 status.go:364] multinode-591540-m02 host status = "Stopped" (err=<nil>)
	I0923 14:11:23.396760 2506118 status.go:377] host is not running, skipping remaining checks
	I0923 14:11:23.396767 2506118 status.go:176] multinode-591540-m02 status: &{Name:multinode-591540-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.91s)
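
Note the exit-code convention surfaced twice above: `minikube status` exits 7 when any host is stopped, so scripts that stop clusters should treat that as expected rather than as a failure. A sketch:

    minikube -p multinode-591540 stop
    if ! minikube -p multinode-591540 status; then
        echo "non-zero exit (7) is expected while the cluster is stopped"
    fi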

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-591540 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0923 14:11:44.466090 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-591540 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (51.156792066s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-591540 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.81s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-591540
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-591540-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-591540-m02 --driver=docker  --container-runtime=crio: exit status 14 (87.57296ms)

                                                
                                                
-- stdout --
	* [multinode-591540-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-591540-m02' is duplicated with machine name 'multinode-591540-m02' in profile 'multinode-591540'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-591540-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-591540-m03 --driver=docker  --container-runtime=crio: (31.612671497s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-591540
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-591540: exit status 80 (301.856661ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-591540 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-591540-m03 already exists in multinode-591540-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-591540-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-591540-m03: (1.951208298s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.00s)
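
Both failures above come from name collisions: `-m02`/`-m03` suffixes are reserved for a profile's own machines, so a standalone profile with such a name clashes with them. Checking existing names before picking a new one avoids the exit-14 and exit-80 paths (the jq path and the profile name `my-second-cluster` are illustrative assumptions):

    minikube profile list --output json | jq -r '.valid[].Name, .invalid[].Name'
    minikube start -p my-second-cluster --driver=docker --container-runtime=crio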

                                                
                                    
TestPreload (125.93s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-158367 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0923 14:13:03.743896 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-158367 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m32.486118391s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-158367 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-158367 image pull gcr.io/k8s-minikube/busybox: (3.114090872s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-158367
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-158367: (5.819013462s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-158367 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-158367 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.850052222s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-158367 image list
helpers_test.go:175: Cleaning up "test-preload-158367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-158367
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-158367: (2.373053215s)
--- PASS: TestPreload (125.93s)
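
What this test demonstrates end to end: with `--preload=false` the cluster starts without the preloaded image tarball, and an image pulled afterwards must still be present after a stop/start of the profile. Condensed (names, versions, and flags are this run's):

    minikube start -p test-preload-158367 --memory=2200 --preload=false \
        --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p test-preload-158367 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-158367
    minikube start -p test-preload-158367 --memory=2200 --wait=true \
        --driver=docker --container-runtime=crio
    minikube -p test-preload-158367 image list   # busybox should still be listed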

                                                
                                    
TestScheduledStopUnix (105.08s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-450367 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-450367 --memory=2048 --driver=docker  --container-runtime=crio: (29.005633135s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-450367 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-450367 -n scheduled-stop-450367
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-450367 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 14:15:28.627733 2383070 retry.go:31] will retry after 124.01µs: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.628866 2383070 retry.go:31] will retry after 168.647µs: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.629998 2383070 retry.go:31] will retry after 317.925µs: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.631084 2383070 retry.go:31] will retry after 416.84µs: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.631997 2383070 retry.go:31] will retry after 712.044µs: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.633309 2383070 retry.go:31] will retry after 609.545µs: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.634460 2383070 retry.go:31] will retry after 694.9µs: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.635887 2383070 retry.go:31] will retry after 1.335429ms: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.638126 2383070 retry.go:31] will retry after 3.586783ms: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.642727 2383070 retry.go:31] will retry after 2.555712ms: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.645456 2383070 retry.go:31] will retry after 7.39859ms: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.654007 2383070 retry.go:31] will retry after 10.167633ms: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.665253 2383070 retry.go:31] will retry after 11.802702ms: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.677570 2383070 retry.go:31] will retry after 21.797145ms: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.699811 2383070 retry.go:31] will retry after 19.149952ms: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
I0923 14:15:28.720076 2383070 retry.go:31] will retry after 54.661632ms: open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/scheduled-stop-450367/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-450367 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-450367 -n scheduled-stop-450367
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-450367
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-450367 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-450367
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-450367: exit status 7 (71.109955ms)

                                                
                                                
-- stdout --
	scheduled-stop-450367
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-450367 -n scheduled-stop-450367
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-450367 -n scheduled-stop-450367: exit status 7 (66.657612ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-450367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-450367
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-450367: (4.522334294s)
--- PASS: TestScheduledStopUnix (105.08s)
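
The scheduled-stop lifecycle exercised above, condensed. The retry lines in the log are the test polling for the scheduler's pid file, and exit 7 from `status` simply means the host is now stopped:

    minikube stop -p scheduled-stop-450367 --schedule 5m       # arm a stop 5 minutes out
    minikube stop -p scheduled-stop-450367 --cancel-scheduled  # disarm it
    minikube stop -p scheduled-stop-450367 --schedule 15s      # arm a short one
    sleep 20
    minikube status -p scheduled-stop-450367 --format '{{.Host}}' || true   # prints Stopped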

                                                
                                    
TestInsufficientStorage (10.3s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-909918 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E0923 14:16:44.466669 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-909918 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.831522245s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"398e9c0d-0b5d-4446-b5f8-552bf1a72713","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-909918] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a65ad3e3-314b-4be8-9238-e3622184b9ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"e5357891-bbe8-4dec-8b7e-4636113d8da1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5e1ed62d-fe3c-4bcf-89fd-b0631226ad6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig"}}
	{"specversion":"1.0","id":"d53273c1-7819-455c-82c1-2a9197434dfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube"}}
	{"specversion":"1.0","id":"76cdde78-3318-4113-88c3-6ede3725b920","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a2878d48-a0ea-4023-a255-7d50f841dde0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"df18ff01-3d35-4220-8878-200447f73f81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"04fb940c-bd9a-4b5c-85cd-0db8468d6063","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2cc82010-1326-4f7d-b8ec-23f86466fb2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6fe1ee40-2e7a-431b-90e9-9d8016eef38c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3ca55082-25d5-4e45-b8e8-7e22001e2ed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-909918\" primary control-plane node in \"insufficient-storage-909918\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0bddd27f-7342-4640-b2b8-83362783b582","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"808a6f6d-2001-4187-a885-34481b94e557","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c395965-4b7e-4885-aab3-0fd1cef0dcec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-909918 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-909918 --output=json --layout=cluster: exit status 7 (292.931536ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-909918","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-909918","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 14:16:52.307314 2523755 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-909918" does not appear in /home/jenkins/minikube-integration/19690-2377681/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-909918 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-909918 --output=json --layout=cluster: exit status 7 (275.313424ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-909918","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-909918","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 14:16:52.584968 2523815 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-909918" does not appear in /home/jenkins/minikube-integration/19690-2377681/kubeconfig
	E0923 14:16:52.595585 2523815 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/insufficient-storage-909918/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-909918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-909918
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-909918: (1.901240085s)
--- PASS: TestInsufficientStorage (10.30s)
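
With `--output=json`, `start` streams one CloudEvents-style JSON object per line, which makes the failure machine-readable. Extracting the error event might look like this (the jq invocation is illustrative; the artificial storage limits come from the MINIKUBE_TEST_* variables above):

    minikube start -p insufficient-storage-909918 --output=json \
        --driver=docker --container-runtime=crio \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # -> Docker is out of disk space! (/var is at 100% of capacity). ...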

                                                
                                    
TestRunningBinaryUpgrade (90.31s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1101217088 start -p running-upgrade-066715 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0923 14:21:44.466857 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1101217088 start -p running-upgrade-066715 --memory=2200 --vm-driver=docker  --container-runtime=crio: (37.735656079s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-066715 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-066715 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.093830287s)
helpers_test.go:175: Cleaning up "running-upgrade-066715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-066715
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-066715: (2.75171741s)
--- PASS: TestRunningBinaryUpgrade (90.31s)
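
The upgrade contract being checked: a profile created by an older release binary must come up cleanly when `start` is re-run on the same profile by the current binary. A sketch (the /tmp path is the test's cached copy of v1.26.0, which still used `--vm-driver`):

    /tmp/minikube-v1.26.0.1101217088 start -p running-upgrade-066715 --memory=2200 \
        --vm-driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p running-upgrade-066715 --memory=2200 \
        --driver=docker --container-runtime=crio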

                                                
                                    
TestKubernetesUpgrade (405.83s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-407886 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-407886 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m11.583221649s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-407886
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-407886: (1.302344796s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-407886 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-407886 status --format={{.Host}}: exit status 7 (85.880291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-407886 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-407886 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.304290814s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-407886 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-407886 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-407886 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (156.715511ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-407886] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-407886
	    minikube start -p kubernetes-upgrade-407886 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4078862 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-407886 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-407886 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-407886 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.696467056s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-407886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-407886
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-407886: (2.538577582s)
--- PASS: TestKubernetesUpgrade (405.83s)
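
As the exit-106 message spells out, in-place downgrades are refused; the supported route is to recreate the profile at the older version (commands verbatim from the suggestion above):

    minikube delete -p kubernetes-upgrade-407886
    minikube start -p kubernetes-upgrade-407886 --kubernetes-version=v1.20.0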

                                                
                                    
TestMissingContainerUpgrade (168.67s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2653026147 start -p missing-upgrade-788730 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2653026147 start -p missing-upgrade-788730 --memory=2200 --driver=docker  --container-runtime=crio: (1m27.970828382s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-788730
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-788730: (12.949444015s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-788730
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-788730 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-788730 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.387710761s)
helpers_test.go:175: Cleaning up "missing-upgrade-788730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-788730
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-788730: (2.186218122s)
--- PASS: TestMissingContainerUpgrade (168.67s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-860827 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-860827 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (86.936537ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-860827] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
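
The exit-14 usage error is about flag exclusivity: `--no-kubernetes` cannot be combined with `--kubernetes-version`. If a version is pinned in the global config, unset it first, as the error text suggests:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-860827 --no-kubernetes --driver=docker --container-runtime=crio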

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (38.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-860827 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-860827 --driver=docker  --container-runtime=crio: (37.760049296s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-860827 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.22s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (30.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-860827 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-860827 --no-kubernetes --driver=docker  --container-runtime=crio: (28.511633948s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-860827 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-860827 status -o json: exit status 2 (363.580802ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-860827","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-860827
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-860827: (2.048385497s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.92s)

                                                
                                    
TestNoKubernetes/serial/Start (10.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-860827 --no-kubernetes --driver=docker  --container-runtime=crio
E0923 14:18:03.743036 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-860827 --no-kubernetes --driver=docker  --container-runtime=crio: (10.339016699s)
--- PASS: TestNoKubernetes/serial/Start (10.34s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-860827 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-860827 "sudo systemctl is-active --quiet service kubelet": exit status 1 (391.774321ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
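
The "exit status 3" in stderr is the expected signal here: systemd's `is-active` returns 0 for an active unit and 3 for an inactive one, so the test passes precisely because the ssh command fails. A sketch of the same probe:

    if minikube ssh -p NoKubernetes-860827 "sudo systemctl is-active --quiet service kubelet"; then
        echo "kubelet is running"
    else
        echo "kubelet is not running (is-active exits 3 for inactive units)"
    fi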

                                                
                                    
TestNoKubernetes/serial/ProfileList (4.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (3.86009547s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.37s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-860827
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-860827: (1.269698053s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-860827 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-860827 --driver=docker  --container-runtime=crio: (7.0098401s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.01s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-860827 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-860827 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.753802ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.90s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (90.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.65814702 start -p stopped-upgrade-240719 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.65814702 start -p stopped-upgrade-240719 --memory=2200 --vm-driver=docker  --container-runtime=crio: (40.667948705s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.65814702 -p stopped-upgrade-240719 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.65814702 -p stopped-upgrade-240719 stop: (2.513254619s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-240719 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0923 14:21:06.812076 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-240719 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.932560274s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (90.11s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-240719
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-240719: (1.103800199s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    
TestPause/serial/Start (81.31s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-096861 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0923 14:23:03.743519 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-096861 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.310113063s)
--- PASS: TestPause/serial/Start (81.31s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (30.65s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-096861 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-096861 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.637644034s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.65s)

                                                
                                    
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-096861 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)
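
The remaining subtests walk the rest of the pause lifecycle; at a glance (profile name is this run's):

    minikube pause -p pause-096861
    minikube status -p pause-096861 --output=json --layout=cluster   # exits 2 while paused
    minikube unpause -p pause-096861
    minikube delete -p pause-096861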

                                                
                                    
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-096861 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-096861 --output=json --layout=cluster: exit status 2 (306.613772ms)

                                                
                                                
-- stdout --
	{"Name":"pause-096861","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-096861","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
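
The `--layout=cluster` JSON borrows HTTP-style codes: 200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage (seen in TestInsufficientStorage above). Since a paused cluster makes `status` exit 2, capture the output before parsing (jq usage is illustrative):

    out=$(minikube status -p pause-096861 --output=json --layout=cluster || true)
    echo "$out" | jq -r '.StatusName'   # -> Paused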

                                                
                                    
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-096861 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-096861 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

TestPause/serial/DeletePaused (2.65s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-096861 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-096861 --alsologtostderr -v=5: (2.647156059s)
--- PASS: TestPause/serial/DeletePaused (2.65s)

TestPause/serial/VerifyDeletedResources (12.85s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0923 14:24:47.533098 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (12.791701332s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-096861
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-096861: exit status 1 (21.119538ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-096861: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (12.85s)
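
Note: the cleanup verification above relies on `docker volume inspect` failing with "no such volume" once the profile is deleted. A minimal Go sketch of that check (a hypothetical helper, wrapping the same command shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeDeleted mirrors the check above: `docker volume inspect` exits
// non-zero and prints "no such volume" once minikube has removed the volume.
func volumeDeleted(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	return err != nil && strings.Contains(string(out), "no such volume")
}

func main() {
	fmt.Println(volumeDeleted("pause-096861")) // true after `minikube delete -p pause-096861`
}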

TestNetworkPlugins/group/false (4.31s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-065741 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-065741 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (211.797464ms)

-- stdout --
	* [false-065741] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0923 14:25:19.044853 2564125 out.go:345] Setting OutFile to fd 1 ...
	I0923 14:25:19.045082 2564125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:25:19.045104 2564125 out.go:358] Setting ErrFile to fd 2...
	I0923 14:25:19.045125 2564125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 14:25:19.045375 2564125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-2377681/.minikube/bin
	I0923 14:25:19.045786 2564125 out.go:352] Setting JSON to false
	I0923 14:25:19.046773 2564125 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":58062,"bootTime":1727043457,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0923 14:25:19.046862 2564125 start.go:139] virtualization:  
	I0923 14:25:19.048943 2564125 out.go:177] * [false-065741] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0923 14:25:19.050411 2564125 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 14:25:19.050484 2564125 notify.go:220] Checking for updates...
	I0923 14:25:19.052439 2564125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 14:25:19.054052 2564125 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-2377681/kubeconfig
	I0923 14:25:19.055854 2564125 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-2377681/.minikube
	I0923 14:25:19.057214 2564125 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0923 14:25:19.058409 2564125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 14:25:19.060202 2564125 config.go:182] Loaded profile config "force-systemd-flag-380959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 14:25:19.060360 2564125 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 14:25:19.100874 2564125 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 14:25:19.100993 2564125 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 14:25:19.178430 2564125 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-23 14:25:19.165716443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0923 14:25:19.178555 2564125 docker.go:318] overlay module found
	I0923 14:25:19.180095 2564125 out.go:177] * Using the docker driver based on user configuration
	I0923 14:25:19.181382 2564125 start.go:297] selected driver: docker
	I0923 14:25:19.181395 2564125 start.go:901] validating driver "docker" against <nil>
	I0923 14:25:19.181408 2564125 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 14:25:19.183137 2564125 out.go:201] 
	W0923 14:25:19.184758 2564125 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0923 14:25:19.186491 2564125 out.go:201] 

** /stderr **
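
Note: exit status 14 is minikube's MK_USAGE error class. This test passes --cni=false on purpose and expects the rejection above, because CRI-O ships no built-in pod networking and therefore needs a CNI plugin. A hypothetical Go restatement of the rule being exercised (the message text matches the stderr above; the function name is illustrative, not minikube's internal API):

package main

import "fmt"

// validateCNI restates the rule above: the crio runtime cannot start without
// a CNI, so --cni=false must be rejected before any cluster is created
// (minikube reports this as MK_USAGE, exit code 14).
func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return fmt.Errorf("Exiting due to MK_USAGE: The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false")) // the error seen in the stderr above
}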
net_test.go:88: 
----------------------- debugLogs start: false-065741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-065741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-065741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-065741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-065741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-065741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-065741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-065741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-065741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-065741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-065741

>>> host: /etc/nsswitch.conf:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: /etc/hosts:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: /etc/resolv.conf:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-065741

>>> host: crictl pods:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: crictl containers:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> k8s: describe netcat deployment:
error: context "false-065741" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-065741" does not exist

>>> k8s: netcat logs:
error: context "false-065741" does not exist

>>> k8s: describe coredns deployment:
error: context "false-065741" does not exist

>>> k8s: describe coredns pods:
error: context "false-065741" does not exist

>>> k8s: coredns logs:
error: context "false-065741" does not exist

>>> k8s: describe api server pod(s):
error: context "false-065741" does not exist

>>> k8s: api server logs:
error: context "false-065741" does not exist

>>> host: /etc/cni:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: ip a s:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: ip r s:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: iptables-save:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: iptables table nat:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> k8s: describe kube-proxy daemon set:
error: context "false-065741" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-065741" does not exist

>>> k8s: kube-proxy logs:
error: context "false-065741" does not exist

>>> host: kubelet daemon status:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: kubelet daemon config:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> k8s: kubelet logs:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-065741

>>> host: docker daemon status:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: docker daemon config:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: /etc/docker/daemon.json:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: docker system info:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: cri-docker daemon status:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: cri-docker daemon config:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: cri-dockerd version:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: containerd daemon status:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: containerd daemon config:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: /etc/containerd/config.toml:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: containerd config dump:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: crio daemon status:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: crio daemon config:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: /etc/crio:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

>>> host: crio config:
* Profile "false-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-065741"

----------------------- debugLogs end: false-065741 [took: 3.883013775s] --------------------------------
helpers_test.go:175: Cleaning up "false-065741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-065741
--- PASS: TestNetworkPlugins/group/false (4.31s)

TestStartStop/group/old-k8s-version/serial/FirstStart (184.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-590909 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0923 14:28:03.743881 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-590909 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m4.903424926s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (184.90s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-118313 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-118313 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m20.811313394s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.81s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-590909 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d8b2c488-6d97-4080-aaef-cdab3f73c9db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d8b2c488-6d97-4080-aaef-cdab3f73c9db] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.004404704s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-590909 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.89s)
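
Note: the "waiting 8m0s for pods matching ..." lines above come from a polling helper in helpers_test.go that watches pod phases via the API. A rough command-line equivalent in Go (a sketch, not the helper's actual implementation; the context name and label selector are taken from the log):

package main

import (
	"os"
	"os/exec"
	"time"
)

// waitForLabel blocks until pods matching the selector are Ready or the
// timeout expires, approximating what the test helper does above.
func waitForLabel(kubeContext, selector string, timeout time.Duration) error {
	cmd := exec.Command("kubectl", "--context", kubeContext, "wait", "pod",
		"-l", selector, "--for=condition=Ready", "--timeout="+timeout.String())
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := waitForLabel("old-k8s-version-590909", "integration-test=busybox", 8*time.Minute); err != nil {
		os.Exit(1)
	}
}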

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-590909 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-590909 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.325461924s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-590909 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.52s)

TestStartStop/group/old-k8s-version/serial/Stop (12.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-590909 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-590909 --alsologtostderr -v=3: (12.404902259s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.40s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-590909 -n old-k8s-version-590909
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-590909 -n old-k8s-version-590909: exit status 7 (69.600305ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-590909 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
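
Note: against a stopped profile, `minikube status` prints "Stopped" and exits 7, which the test deliberately tolerates ("status error: exit status 7 (may be ok)"). A Go sketch of reading that exit code (hypothetical helper, same command as in the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus returns the Host field and the exit code of `minikube status`.
// Exit 7 with output "Stopped" is the normal result for a halted cluster.
func hostStatus(profile string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return strings.TrimSpace(string(out)), ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), 0
}

func main() {
	state, code := hostStatus("old-k8s-version-590909")
	fmt.Printf("%s (exit %d)\n", state, code) // "Stopped (exit 7)" while the cluster is down
}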

TestStartStop/group/old-k8s-version/serial/SecondStart (135.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-590909 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-590909 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m15.327038705s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-590909 -n old-k8s-version-590909
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (135.69s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-118313 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [03769b0c-b5a7-4e96-837b-73e8d816f0d3] Pending
helpers_test.go:344: "busybox" [03769b0c-b5a7-4e96-837b-73e8d816f0d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [03769b0c-b5a7-4e96-837b-73e8d816f0d3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00429952s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-118313 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-118313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-118313 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014330565s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-118313 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-118313 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-118313 --alsologtostderr -v=3: (11.973617382s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-118313 -n default-k8s-diff-port-118313
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-118313 -n default-k8s-diff-port-118313: exit status 7 (68.960259ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-118313 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-118313 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 14:31:44.466496 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-118313 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m26.216385638s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-118313 -n default-k8s-diff-port-118313
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.59s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bxffb" [298950f0-bfbf-4ecb-a3aa-39bdb7cc093d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004895765s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bxffb" [298950f0-bfbf-4ecb-a3aa-39bdb7cc093d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004986433s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-590909 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-590909 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
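
Note: the "Found non-minikube image" lines above flag image references outside the set expected for the Kubernetes version under test. A Go sketch of the same kind of filtering over `minikube image list` output; the prefix allowlist here is illustrative only, not the suite's actual expected-image table:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Default (non-JSON) output is one image reference per line.
	out, err := exec.Command("out/minikube-linux-arm64", "-p",
		"old-k8s-version-590909", "image", "list").Output()
	if err != nil {
		panic(err)
	}
	// Illustrative allowlist; the test derives its list from the k8s version.
	expected := []string{"registry.k8s.io/", "k8s.gcr.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	sc := bufio.NewScanner(bytes.NewReader(out))
scan:
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		if img == "" {
			continue
		}
		for _, p := range expected {
			if strings.HasPrefix(img, p) {
				continue scan
			}
		}
		fmt.Println("Found non-minikube image:", img)
	}
}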

TestStartStop/group/old-k8s-version/serial/Pause (2.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-590909 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-590909 -n old-k8s-version-590909
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-590909 -n old-k8s-version-590909: exit status 2 (327.435033ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-590909 -n old-k8s-version-590909
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-590909 -n old-k8s-version-590909: exit status 2 (308.517603ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-590909 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-590909 -n old-k8s-version-590909
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-590909 -n old-k8s-version-590909
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.93s)

TestStartStop/group/embed-certs/serial/FirstStart (74.16s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-661412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 14:33:03.743458 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-661412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m14.160640412s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.16s)

TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-661412 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e7b99744-ebaf-43b1-8a58-a907924969aa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e7b99744-ebaf-43b1-8a58-a907924969aa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003604467s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-661412 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-661412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-661412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.001204339s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-661412 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/embed-certs/serial/Stop (11.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-661412 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-661412 --alsologtostderr -v=3: (11.969512709s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-661412 -n embed-certs-661412
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-661412 -n embed-certs-661412: exit status 7 (79.682598ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-661412 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (277.62s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-661412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 14:34:51.956681 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:34:51.963013 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:34:51.974384 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:34:51.995765 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:34:52.037126 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:34:52.118527 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:34:52.279995 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:34:52.601322 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:34:53.242627 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:34:54.524596 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:34:57.086037 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:35:02.207815 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:35:12.449386 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:35:32.931145 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-661412 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m37.270840887s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-661412 -n embed-certs-661412
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (277.62s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k4wkg" [b3096aa4-fc57-4991-be91-4ede5ffef7e8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00422306s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-k4wkg" [b3096aa4-fc57-4991-be91-4ede5ffef7e8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004112124s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-118313 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-118313 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-118313 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-118313 -n default-k8s-diff-port-118313
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-118313 -n default-k8s-diff-port-118313: exit status 2 (315.368168ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-118313 -n default-k8s-diff-port-118313
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-118313 -n default-k8s-diff-port-118313: exit status 2 (315.438238ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-118313 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-118313 -n default-k8s-diff-port-118313
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-118313 -n default-k8s-diff-port-118313
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.95s)
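The Pause sequence above is reproducible by hand; a sketch using the same profile, where exit status 2 from status is expected while components are paused (the test logs it as "may be ok"):

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-118313
	out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-118313   # prints "Paused", exits 2
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-118313     # prints "Stopped", exits 2
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-118313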

TestStartStop/group/no-preload/serial/FirstStart (61.36s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-415324 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 14:36:13.892621 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:36:44.466344 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-415324 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m1.356076465s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.36s)
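For context, --preload=false tells minikube not to use its preloaded image/binary tarball, so every component image is pulled individually; the start line above, reflowed for readability (flags verbatim):

	out/minikube-linux-arm64 start -p no-preload-415324 --memory=2200 \
	  --alsologtostderr --wait=true --preload=false \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1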

TestStartStop/group/no-preload/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-415324 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [93c157c4-2aca-42cb-9a14-a5110c717c5b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [93c157c4-2aca-42cb-9a14-a5110c717c5b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004239811s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-415324 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.36s)
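A hand-run equivalent of the DeployApp check, assuming kubectl and the repository's testdata/busybox.yaml (the manifest itself is not reproduced in this report):

	kubectl --context no-preload-415324 create -f testdata/busybox.yaml
	kubectl --context no-preload-415324 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context no-preload-415324 exec busybox -- /bin/sh -c "ulimit -n"   # open-file limit inside the container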

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-415324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-415324 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)
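One way to confirm the --images/--registries overrides took effect; a sketch assuming the addon created a metrics-server Deployment in kube-system (the describe call above shows the same data more verbosely):

	kubectl --context no-preload-415324 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'   # expect an image hosted under fake.domain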

TestStartStop/group/no-preload/serial/Stop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-415324 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-415324 --alsologtostderr -v=3: (12.008104972s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-415324 -n no-preload-415324
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-415324 -n no-preload-415324: exit status 7 (74.044783ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-415324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (289.29s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-415324 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 14:37:35.814473 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:37:46.814105 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:38:03.744145 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-415324 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m48.838960798s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-415324 -n no-preload-415324
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.29s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pdzrb" [29f797c3-edef-462c-accd-30936bd30a93] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003967382s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pdzrb" [29f797c3-edef-462c-accd-30936bd30a93] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004330127s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-661412 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-661412 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
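The VerifyKubernetesImages step lists images in JSON and flags anything outside its known Kubernetes image set; a rough manual equivalent, assuming jq is installed and that the JSON entries expose a repoTags field (the field name is an assumption, not confirmed by this log):

	out/minikube-linux-arm64 -p embed-certs-661412 image list --format=json \
	  | jq -r '.[].repoTags[]?'   # then eyeball tags that are not core k8s images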

TestStartStop/group/embed-certs/serial/Pause (2.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-661412 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-661412 -n embed-certs-661412
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-661412 -n embed-certs-661412: exit status 2 (308.519792ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-661412 -n embed-certs-661412
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-661412 -n embed-certs-661412: exit status 2 (327.915026ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-661412 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-661412 -n embed-certs-661412
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-661412 -n embed-certs-661412
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.97s)

TestStartStop/group/newest-cni/serial/FirstStart (36.66s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-348819 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 14:39:51.956735 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-348819 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (36.655934415s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.66s)
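The newest-cni start packs several knobs onto one line; reflowed here with flags verbatim from the Run: line above. Note --wait covers only the apiserver, system pods, and the default service account, since no CNI pods exist yet to wait for:

	out/minikube-linux-arm64 start -p newest-cni-348819 --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1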

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-348819 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-348819 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.045202629s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-348819 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-348819 --alsologtostderr -v=3: (1.260857045s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-348819 -n newest-cni-348819
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-348819 -n newest-cni-348819: exit status 7 (82.294525ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-348819 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (15.84s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-348819 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-348819 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (15.44688268s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-348819 -n newest-cni-348819
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.84s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-348819 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (3.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-348819 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-348819 -n newest-cni-348819
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-348819 -n newest-cni-348819: exit status 2 (324.11598ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-348819 -n newest-cni-348819
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-348819 -n newest-cni-348819: exit status 2 (329.924599ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-348819 --alsologtostderr -v=1
E0923 14:40:19.656083 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-348819 -n newest-cni-348819
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-348819 -n newest-cni-348819
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.07s)

TestNetworkPlugins/group/auto/Start (80.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0923 14:41:00.924601 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:00.930942 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:00.942211 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:00.963584 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:01.004970 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:01.086425 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:01.247871 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:01.569355 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:02.211477 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:03.493083 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:06.054564 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:11.176293 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:21.418246 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:27.535189 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:41:41.900462 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m20.913525509s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.91s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-065741 "pgrep -a kubelet"
E0923 14:41:44.466134 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
I0923 14:41:44.540260 2383070 config.go:182] Loaded profile config "auto-065741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
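KubeletFlags simply reads the kubelet's full command line over SSH; pgrep -a prints the PID plus argv, so runtime flags can be asserted. Runnable by hand:

	out/minikube-linux-arm64 ssh -p auto-065741 "pgrep -a kubelet"   # full kubelet command line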

TestNetworkPlugins/group/auto/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-065741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rdxmc" [4854a944-03d4-42d3-8e8a-4f72d072784b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rdxmc" [4854a944-03d4-42d3-8e8a-4f72d072784b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004739757s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-065741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
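The three probes above build on one another: DNS resolves an in-cluster service name, Localhost checks the pod can reach its own port on 127.0.0.1, and HairPin checks the pod can reach itself through its own Service name (traffic leaves the pod and loops back). The trio, runnable by hand against the same context:

	kubectl --context auto-065741 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"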

TestNetworkPlugins/group/kindnet/Start (79.68s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m19.684885501s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.68s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9rnbg" [acbc84c0-dbab-4ff9-a92a-34d6940f4272] Running
E0923 14:42:22.861947 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00542336s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9rnbg" [acbc84c0-dbab-4ff9-a92a-34d6940f4272] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005052534s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-415324 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-415324 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/no-preload/serial/Pause (4.14s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-415324 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-415324 -n no-preload-415324
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-415324 -n no-preload-415324: exit status 2 (343.700805ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-415324 -n no-preload-415324
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-415324 -n no-preload-415324: exit status 2 (477.712648ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-415324 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-415324 --alsologtostderr -v=1: (1.156367736s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-415324 -n no-preload-415324
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-415324 -n no-preload-415324
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.14s)
E0923 14:47:50.725009 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/calico/Start (62.48s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0923 14:43:03.743108 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/addons-133262/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.478647636s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.48s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-jwwnj" [5442b27d-6e22-4633-a518-a52a3f7eed0c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00411107s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
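A kubectl-native stand-in for the helper's poll loop, assuming the kindnet DaemonSet pods carry the app=kindnet label matched above:

	kubectl --context kindnet-065741 -n kube-system wait --for=condition=Ready \
	  pod -l app=kindnet --timeout=10m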

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-065741 "pgrep -a kubelet"
I0923 14:43:41.556137 2383070 config.go:182] Loaded profile config "kindnet-065741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-065741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4wst7" [17b38685-2b37-4021-a750-ce2590a2ae94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4wst7" [17b38685-2b37-4021-a750-ce2590a2ae94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.003070387s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.25s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qmrnn" [89fb59e5-ac32-4a4f-b250-9d03817c71ec] Running
E0923 14:43:44.784182 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004624026s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-065741 "pgrep -a kubelet"
I0923 14:43:50.647351 2383070 config.go:182] Loaded profile config "calico-065741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-065741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ncvpc" [8bb35537-c2b1-4769-a80d-991fee0681ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ncvpc" [8bb35537-c2b1-4769-a80d-991fee0681ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004450123s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-065741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-065741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

TestNetworkPlugins/group/custom-flannel/Start (55.56s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (55.558411618s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.56s)
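Unlike the built-in plugin names used elsewhere in this run (--cni=kindnet, --cni=calico, --cni=flannel), here --cni points at a local manifest that minikube applies after bring-up; the start line reflowed with flags verbatim:

	out/minikube-linux-arm64 start -p custom-flannel-065741 --memory=3072 --alsologtostderr \
	  --wait=true --wait-timeout=15m \
	  --cni=testdata/kube-flannel.yaml \
	  --driver=docker --container-runtime=crio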

TestNetworkPlugins/group/enable-default-cni/Start (50.9s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0923 14:44:51.956650 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/old-k8s-version-590909/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (50.904529261s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.90s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-065741 "pgrep -a kubelet"
I0923 14:45:15.614281 2383070 config.go:182] Loaded profile config "custom-flannel-065741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-065741 replace --force -f testdata/netcat-deployment.yaml
I0923 14:45:15.879618 2383070 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dvxrs" [ca143f05-e001-4add-8d9d-b607407e3bda] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dvxrs" [ca143f05-e001-4add-8d9d-b607407e3bda] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003642058s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-065741 "pgrep -a kubelet"
I0923 14:45:19.792215 2383070 config.go:182] Loaded profile config "enable-default-cni-065741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-065741 replace --force -f testdata/netcat-deployment.yaml
I0923 14:45:20.119242 2383070 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dkjcs" [2936fc96-a59e-4e2e-b8fc-351b87ebc974] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dkjcs" [2936fc96-a59e-4e2e-b8fc-351b87ebc974] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004704332s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-065741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-065741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
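
For reference, the DNS, Localhost and HairPin probes above can be replayed by hand; the commands are the same ones the test runs. A minimal sketch, assuming the enable-default-cni-065741 profile and its netcat deployment are still up (the suite tears both down at the end of the run):

# DNS: resolve the in-cluster kubernetes service through cluster DNS
kubectl --context enable-default-cni-065741 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: verify the pod can reach a listener on its own loopback (port 8080)
kubectl --context enable-default-cni-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: verify the pod can reach itself back through its own "netcat" service
kubectl --context enable-default-cni-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"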

TestNetworkPlugins/group/flannel/Start (66.51s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m6.506946191s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.51s)

TestNetworkPlugins/group/bridge/Start (87.11s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0923 14:46:00.924628 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:28.626430 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/default-k8s-diff-port-118313/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:44.465880 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/functional-085557/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:44.822925 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:44.829609 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:44.841153 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:44.862531 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:44.904058 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:44.985593 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:45.147418 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:45.469407 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:46.111536 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:47.393657 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:49.955054 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:46:55.077292 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-065741 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m27.11042948s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nkpj2" [23fa8e9c-6f7c-4c4b-89af-e0076a9bc629] Running
E0923 14:47:05.319274 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00375919s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
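
The check above only waits for pods labeled app=flannel in the kube-flannel namespace to report Running. A sketch of the equivalent manual inspection, assuming the flannel-065741 profile is still up:

# List the flannel DaemonSet pods the test polls (namespace and label as logged above)
kubectl --context flannel-065741 -n kube-flannel get pods -l app=flannel
# If a pod stays Pending, describe it to see scheduling or image-pull events
kubectl --context flannel-065741 -n kube-flannel describe pods -l app=flannel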

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-065741 "pgrep -a kubelet"
I0923 14:47:05.592684 2383070 config.go:182] Loaded profile config "flannel-065741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-065741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qp8r6" [cdba573c-8916-40b9-afed-fcde9c38afdb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 14:47:09.747609 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:47:09.754359 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:47:09.766146 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:47:09.787501 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:47:09.828846 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:47:09.910876 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:47:10.072336 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:47:10.393869 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:47:11.035203 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
E0923 14:47:12.316480 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-qp8r6" [cdba573c-8916-40b9-afed-fcde9c38afdb] Running
E0923 14:47:14.877910 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.003945055s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.29s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-065741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-065741 "pgrep -a kubelet"
I0923 14:47:21.836342 2383070 config.go:182] Loaded profile config "bridge-065741": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-065741 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-428nn" [3182cf7e-6255-4e71-9f5b-36e16dfb2372] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 14:47:25.801069 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/auto-065741/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-428nn" [3182cf7e-6255-4e71-9f5b-36e16dfb2372] Running
E0923 14:47:30.242848 2383070 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-2377681/.minikube/profiles/no-preload-415324/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.009672028s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-065741 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-065741 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)
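
Every network-plugin group above follows the same shape: start a profile with the CNI under test, deploy the netcat fixture, then run the DNS, Localhost and HairPin probes against it. A condensed sketch using the flags from the Start tests above; the profile name demo-bridge is hypothetical:

# Bring up a crio cluster with the bridge CNI
out/minikube-linux-arm64 start -p demo-bridge --memory=3072 --wait=true --cni=bridge --driver=docker --container-runtime=crio
# Install the netcat fixture used by the NetCatPod tests
kubectl --context demo-bridge replace --force -f testdata/netcat-deployment.yaml
# Wait for the deployment before running the probes shown earlier
kubectl --context demo-bridge wait --for=condition=available deployment/netcat --timeout=5m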

Test skip (29/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-237977 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-237977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-237977
--- SKIP: TestDownloadOnlyKic (0.53s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-452190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-452190
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (4.14s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-065741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-065741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-065741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-065741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-065741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-065741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-065741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-065741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-065741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-065741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-065741

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: /etc/hosts:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: /etc/resolv.conf:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-065741

>>> host: crictl pods:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: crictl containers:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> k8s: describe netcat deployment:
error: context "kubenet-065741" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-065741" does not exist

>>> k8s: netcat logs:
error: context "kubenet-065741" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-065741" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-065741" does not exist

>>> k8s: coredns logs:
error: context "kubenet-065741" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-065741" does not exist

>>> k8s: api server logs:
error: context "kubenet-065741" does not exist

>>> host: /etc/cni:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: ip a s:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: ip r s:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: iptables-save:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: iptables table nat:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-065741" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-065741" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-065741" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: kubelet daemon config:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> k8s: kubelet logs:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-065741

>>> host: docker daemon status:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: docker daemon config:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: docker system info:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: cri-docker daemon status:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: cri-docker daemon config:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: cri-dockerd version:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: containerd daemon status:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: containerd daemon config:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: containerd config dump:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: crio daemon status:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: crio daemon config:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: /etc/crio:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

>>> host: crio config:
* Profile "kubenet-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-065741"

----------------------- debugLogs end: kubenet-065741 [took: 3.97375745s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-065741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-065741
--- SKIP: TestNetworkPlugins/group/kubenet (4.14s)
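
Every debugLogs probe above failed with "context was not found" or "Profile ... not found" because the skip fires before any kubenet cluster is created, so there is nothing to collect. A quick sanity check that no stale context was left behind (assumes only a default kubeconfig):

# No kubenet-065741 entry should appear in the context list
kubectl config get-contexts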

TestNetworkPlugins/group/cilium (5.76s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-065741 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-065741

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-065741

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-065741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-065741

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-065741

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-065741

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-065741

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-065741

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-065741

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-065741

>>> host: /etc/nsswitch.conf:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

>>> host: /etc/hosts:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

>>> host: /etc/resolv.conf:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-065741

>>> host: crictl pods:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

>>> host: crictl containers:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

>>> k8s: describe netcat deployment:
error: context "cilium-065741" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-065741" does not exist

>>> k8s: netcat logs:
error: context "cilium-065741" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-065741" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-065741" does not exist

>>> k8s: coredns logs:
error: context "cilium-065741" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-065741" does not exist

>>> k8s: api server logs:
error: context "cilium-065741" does not exist

>>> host: /etc/cni:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

>>> host: ip a s:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

>>> host: ip r s:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

>>> host: iptables-save:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

>>> host: iptables table nat:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-065741

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-065741

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-065741" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-065741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-065741

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-065741

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-065741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-065741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-065741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-065741" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-065741" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-065741

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-065741" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-065741"

                                                
                                                
----------------------- debugLogs end: cilium-065741 [took: 5.526520708s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-065741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-065741
--- SKIP: TestNetworkPlugins/group/cilium (5.76s)